
Ansible facts, magic variables, and lookup-generated variables

Contents

  • Ansible variables: facts && magic variables && lookup-generated variables
  • Fact variables
  • Introduction to facts
  • Setting facts manually
  • Defining new variables with the set_fact module
  • Gathering facts manually
  • Enabling fact caching
  • JSON file fact cache backend
  • Redis fact cache backend
  • Memcached fact cache backend
  • Magic variables
  • hostvars
  • inventory_hostname
  • group_names
  • groups
  • Other variables
  • Variable precedence
  • Generating variables with lookup
  • Overview
  • file
  • pipe
  • env
  • csvfile
  • redis_kv
  • etcd
  • password
  • dnstxt
  • Article source

Ansible variables: facts && magic variables && lookup-generated variables

Fact variables

Introduction to facts

Ansible ships with a module called setup that collects information about a remote host; this information can then be referenced as variables in a playbook. The mechanism the setup module relies on to gather that information is called facts.

[root@node1 ansible]# ansible demo2.example.com -m setup

demo2.example.com | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "192.168.132.132"
        ], 
        "ansible_all_ipv6_addresses": [
            "fe80::6a92:62ba:1b33:c93d"
        ], 
        "ansible_apparmor": {
            "status": "disabled"
        }, 
        "ansible_architecture": "x86_64", 
        "ansible_bios_date": "04/13/2018", 
        "ansible_bios_version": "6.00", 
        "ansible_cmdline": {
            "BOOT_IMAGE": "/vmlinuz-3.10.0-957.27.2.el7.x86_64", 
            "LANG": "en_US.UTF-8", 
            "crashkernel": "auto", 
            "quiet": true, 
            "rd.lvm.lv": "centos/swap", 
            "rhgb": true, 
            "ro": true, 
            "root": "/dev/mapper/centos-root"
        }, 
        "ansible_date_time": {
            "date": "2020-04-30", 
            "day": "30", 
            "epoch": "1588215058", 
            "hour": "10", 
            "iso8601": "2020-04-30T02:50:58Z", 
            "iso8601_basic": "20200430T105058313014", 
            "iso8601_basic_short": "20200430T105058", 
            "iso8601_micro": "2020-04-30T02:50:58.313109Z", 
            "minute": "50", 
            "month": "04", 
            "second": "58", 
            "time": "10:50:58", 
            "tz": "CST", 
            "tz_offset": "+0800", 
            "weekday": "Thursday", 
            "weekday_number": "4", 
            "weeknumber": "17", 
            "year": "2020"
        }, 
        "ansible_default_ipv4": {
            "address": "192.168.132.132", 
            "alias": "ens33", 
            "broadcast": "192.168.132.255", 
            "gateway": "192.168.132.2", 
            "interface": "ens33", 
            "macaddress": "00:0c:29:63:fd:11", 
            "mtu": 1500, 
            "netmask": "255.255.255.0", 
            "network": "192.168.132.0", 
            "type": "ether"
        }, 
        "ansible_default_ipv6": {}, 
        "ansible_device_links": {
            "ids": {
                "dm-0": [
                    "dm-name-centos-root", 
                    "dm-uuid-LVM-DOPPciSmMTXQSQJvRdSbWJBmuhdeD9S0AS74ZZdHMVJnlz1YUctYK3lKDmZnrRhM"
                ], 
                "dm-1": [
                    "dm-name-centos-swap", 
                    "dm-uuid-LVM-DOPPciSmMTXQSQJvRdSbWJBmuhdeD9S04iSoOFGXPLXY22VMgaQAGIOSYqrk66ql"
                ], 
                "sda2": [
                    "lvm-pv-uuid-bvlLLc-vsnl-w4tp-yxSU-WAvm-0OzC-plOsdY"
                ], 
                "sr0": [
                    "ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001"
                ]
            }, 
            "labels": {}, 
            "masters": {
                "sda2": [
                    "dm-0", 
                    "dm-1"
                ]
            }, 
            "uuids": {
                "dm-0": [
                    "ef36deb2-4eae-40f6-824c-fb1ac0d1e009"
                ], 
                "dm-1": [
                    "1d0baec2-b571-4581-acbc-4462a915c751"
                ], 
                "sda1": [
                    "2c23c58e-cec4-4606-8210-ae4e5ec62133"
                ]
            }
        }, 
        "ansible_devices": {
            "dm-0": {
                "holders": [], 
                "host": "", 
                "links": {
                    "ids": [
                        "dm-name-centos-root", 
                        "dm-uuid-LVM-DOPPciSmMTXQSQJvRdSbWJBmuhdeD9S0AS74ZZdHMVJnlz1YUctYK3lKDmZnrRhM"
                    ], 
                    "labels": [], 
                    "masters": [], 
                    "uuids": [
                        "ef36deb2-4eae-40f6-824c-fb1ac0d1e009"
                    ]
                }, 
                "model": null, 
                "partitions": {}, 
                "removable": "0", 
                "rotational": "1", 
                "sas_address": null, 
                "sas_device_handle": null, 
                "scheduler_mode": "", 
                "sectors": "98549760", 
                "sectorsize": "512", 
                "size": "46.99 GB", 
                "support_discard": "0", 
                "vendor": null, 
                "virtual": 1
            }, 
            "dm-1": {
                "holders": [], 
                "host": "", 
                "links": {
                    "ids": [
                        "dm-name-centos-swap", 
                        "dm-uuid-LVM-DOPPciSmMTXQSQJvRdSbWJBmuhdeD9S04iSoOFGXPLXY22VMgaQAGIOSYqrk66ql"
                    ], 
                    "labels": [], 
                    "masters": [], 
                    "uuids": [
                        "1d0baec2-b571-4581-acbc-4462a915c751"
                    ]
                }, 
                "model": null, 
                "partitions": {}, 
                "removable": "0", 
                "rotational": "1", 
                "sas_address": null, 
                "sas_device_handle": null, 
                "scheduler_mode": "", 
                "sectors": "4194304", 
                "sectorsize": "512", 
                "size": "2.00 GB", 
                "support_discard": "0", 
                "vendor": null, 
                "virtual": 1
            }, 
            "sda": {
                "holders": [], 
                "host": "", 
                "links": {
                    "ids": [], 
                    "labels": [], 
                    "masters": [], 
                    "uuids": []
                }, 
                "model": "VMware Virtual S", 
                "partitions": {
                    "sda1": {
                        "holders": [], 
                        "links": {
                            "ids": [], 
                            "labels": [], 
                            "masters": [], 
                            "uuids": [
                                "2c23c58e-cec4-4606-8210-ae4e5ec62133"
                            ]
                        }, 
                        "sectors": "2097152", 
                        "sectorsize": 512, 
                        "size": "1.00 GB", 
                        "start": "2048", 
                        "uuid": "2c23c58e-cec4-4606-8210-ae4e5ec62133"
                    }, 
                    "sda2": {
                        "holders": [
                            "centos-root", 
                            "centos-swap"
                        ], 
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-bvlLLc-vsnl-w4tp-yxSU-WAvm-0OzC-plOsdY"
                            ], 
                            "labels": [], 
                            "masters": [
                                "dm-0", 
                                "dm-1"
                            ], 
                            "uuids": []
                        }, 
                        "sectors": "102758400", 
                        "sectorsize": 512, 
                        "size": "49.00 GB", 
                        "start": "2099200", 
                        "uuid": null
                    }
                }, 
                "removable": "0", 
                "rotational": "1", 
                "sas_address": null, 
                "sas_device_handle": null, 
                "scheduler_mode": "deadline", 
                "sectors": "104857600", 
                "sectorsize": "512", 
                "size": "50.00 GB", 
                "support_discard": "0", 
                "vendor": "VMware,", 
                "virtual": 1
            }, 
            "sr0": {
                "holders": [], 
                "host": "", 
                "links": {
                    "ids": [
                        "ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001"
                    ], 
                    "labels": [], 
                    "masters": [], 
                    "uuids": []
                }, 
                "model": "VMware IDE CDR10", 
                "partitions": {}, 
                "removable": "1", 
                "rotational": "1", 
                "sas_address": null, 
                "sas_device_handle": null, 
                "scheduler_mode": "deadline", 
                "sectors": "2097151", 
                "sectorsize": "512", 
                "size": "1024.00 MB", 
                "support_discard": "0", 
                "vendor": "NECVMWar", 
                "virtual": 1
            }
        }, 
        "ansible_distribution": "CentOS", 
        "ansible_distribution_file_parsed": true, 
        "ansible_distribution_file_path": "/etc/redhat-release", 
        "ansible_distribution_file_variety": "RedHat", 
        "ansible_distribution_major_version": "7", 
        "ansible_distribution_release": "Core", 
        "ansible_distribution_version": "7.7", 
        "ansible_dns": {
            "nameservers": [
                "8.8.8.8"
            ]
        }, 
        "ansible_domain": "", 
        "ansible_effective_group_id": 0, 
        "ansible_effective_user_id": 0, 
        "ansible_ens33": {
            "active": true, 
            "device": "ens33", 
            "features": {
                "busy_poll": "off [fixed]", 
                "fcoe_mtu": "off [fixed]", 
                "generic_receive_offload": "on", 
                "generic_segmentation_offload": "on", 
                "highdma": "off [fixed]", 
                "hw_tc_offload": "off [fixed]", 
                "l2_fwd_offload": "off [fixed]", 
                "large_receive_offload": "off [fixed]", 
                "loopback": "off [fixed]", 
                "netns_local": "off [fixed]", 
                "ntuple_filters": "off [fixed]", 
                "receive_hashing": "off [fixed]", 
                "rx_all": "off", 
                "rx_checksumming": "off", 
                "rx_fcs": "off", 
                "rx_gro_hw": "off [fixed]", 
                "rx_udp_tunnel_port_offload": "off [fixed]", 
                "rx_vlan_filter": "on [fixed]", 
                "rx_vlan_offload": "on", 
                "rx_vlan_stag_filter": "off [fixed]", 
                "rx_vlan_stag_hw_parse": "off [fixed]", 
                "scatter_gather": "on", 
                "tcp_segmentation_offload": "on", 
                "tx_checksum_fcoe_crc": "off [fixed]", 
                "tx_checksum_ip_generic": "on", 
                "tx_checksum_ipv4": "off [fixed]", 
                "tx_checksum_ipv6": "off [fixed]", 
                "tx_checksum_sctp": "off [fixed]", 
                "tx_checksumming": "on", 
                "tx_fcoe_segmentation": "off [fixed]", 
                "tx_gre_csum_segmentation": "off [fixed]", 
                "tx_gre_segmentation": "off [fixed]", 
                "tx_gso_partial": "off [fixed]", 
                "tx_gso_robust": "off [fixed]", 
                "tx_ipip_segmentation": "off [fixed]", 
                "tx_lockless": "off [fixed]", 
                "tx_nocache_copy": "off", 
                "tx_scatter_gather": "on", 
                "tx_scatter_gather_fraglist": "off [fixed]", 
                "tx_sctp_segmentation": "off [fixed]", 
                "tx_sit_segmentation": "off [fixed]", 
                "tx_tcp6_segmentation": "off [fixed]", 
                "tx_tcp_ecn_segmentation": "off [fixed]", 
                "tx_tcp_mangleid_segmentation": "off", 
                "tx_tcp_segmentation": "on", 
                "tx_udp_tnl_csum_segmentation": "off [fixed]", 
                "tx_udp_tnl_segmentation": "off [fixed]", 
                "tx_vlan_offload": "on [fixed]", 
                "tx_vlan_stag_hw_insert": "off [fixed]", 
                "udp_fragmentation_offload": "off [fixed]", 
                "vlan_challenged": "off [fixed]"
            }, 
            "hw_timestamp_filters": [], 
            "ipv4": {
                "address": "192.168.132.132", 
                "broadcast": "192.168.132.255", 
                "netmask": "255.255.255.0", 
                "network": "192.168.132.0"
            }, 
            "ipv6": [
                {
                    "address": "fe80::6a92:62ba:1b33:c93d", 
                    "prefix": "64", 
                    "scope": "link"
                }
            ], 
            "macaddress": "00:0c:29:63:fd:11", 
            "module": "e1000", 
            "mtu": 1500, 
            "pciid": "0000:02:01.0", 
            "promisc": false, 
            "speed": 1000, 
            "timestamping": [
                "tx_software", 
                "rx_software", 
                "software"
            ], 
            "type": "ether"
        }, 
        "ansible_env": {
            "HOME": "/root", 
            "LANG": "C", 
            "LC_ALL": "C", 
            "LC_NUMERIC": "C", 
            "LOGNAME": "root", 
            "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:", 
            "MAIL": "/var/mail/ansible", 
            "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", 
            "PWD": "/home/ansible", 
            "SHELL": "/bin/bash", 
            "SHLVL": "1", 
            "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-kalsgttbdyavwerspucrqragxtbzcurx ; /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1588215056.77-165714705580753/AnsiballZ_setup.py", 
            "SUDO_GID": "1001", 
            "SUDO_UID": "1001", 
            "SUDO_USER": "ansible", 
            "TERM": "xterm", 
            "USER": "root", 
            "USERNAME": "root", 
            "XDG_SESSION_ID": "1914", 
            "_": "/usr/bin/python"
        }, 
        "ansible_fibre_channel_wwn": [], 
        "ansible_fips": false, 
        "ansible_form_factor": "Other", 
        "ansible_fqdn": "node2", 
        "ansible_hostname": "node2", 
        "ansible_hostnqn": "", 
        "ansible_interfaces": [
            "lo", 
            "ens33"
        ], 
        "ansible_is_chroot": false, 
        "ansible_iscsi_iqn": "", 
        "ansible_kernel": "3.10.0-957.27.2.el7.x86_64", 
        "ansible_kernel_version": "#1 SMP Mon Jul 29 17:46:05 UTC 2019", 
        "ansible_lo": {
            "active": true, 
            "device": "lo", 
            "features": {
                "busy_poll": "off [fixed]", 
                "fcoe_mtu": "off [fixed]", 
                "generic_receive_offload": "on", 
                "generic_segmentation_offload": "on", 
                "highdma": "on [fixed]", 
                "hw_tc_offload": "off [fixed]", 
                "l2_fwd_offload": "off [fixed]", 
                "large_receive_offload": "off [fixed]", 
                "loopback": "on [fixed]", 
                "netns_local": "on [fixed]", 
                "ntuple_filters": "off [fixed]", 
                "receive_hashing": "off [fixed]", 
                "rx_all": "off [fixed]", 
                "rx_checksumming": "on [fixed]", 
                "rx_fcs": "off [fixed]", 
                "rx_gro_hw": "off [fixed]", 
                "rx_udp_tunnel_port_offload": "off [fixed]", 
                "rx_vlan_filter": "off [fixed]", 
                "rx_vlan_offload": "off [fixed]", 
                "rx_vlan_stag_filter": "off [fixed]", 
                "rx_vlan_stag_hw_parse": "off [fixed]", 
                "scatter_gather": "on", 
                "tcp_segmentation_offload": "on", 
                "tx_checksum_fcoe_crc": "off [fixed]", 
                "tx_checksum_ip_generic": "on [fixed]", 
                "tx_checksum_ipv4": "off [fixed]", 
                "tx_checksum_ipv6": "off [fixed]", 
                "tx_checksum_sctp": "on [fixed]", 
                "tx_checksumming": "on", 
                "tx_fcoe_segmentation": "off [fixed]", 
                "tx_gre_csum_segmentation": "off [fixed]", 
                "tx_gre_segmentation": "off [fixed]", 
                "tx_gso_partial": "off [fixed]", 
                "tx_gso_robust": "off [fixed]", 
                "tx_ipip_segmentation": "off [fixed]", 
                "tx_lockless": "on [fixed]", 
                "tx_nocache_copy": "off [fixed]", 
                "tx_scatter_gather": "on [fixed]", 
                "tx_scatter_gather_fraglist": "on [fixed]", 
                "tx_sctp_segmentation": "on", 
                "tx_sit_segmentation": "off [fixed]", 
                "tx_tcp6_segmentation": "on", 
                "tx_tcp_ecn_segmentation": "on", 
                "tx_tcp_mangleid_segmentation": "on", 
                "tx_tcp_segmentation": "on", 
                "tx_udp_tnl_csum_segmentation": "off [fixed]", 
                "tx_udp_tnl_segmentation": "off [fixed]", 
                "tx_vlan_offload": "off [fixed]", 
                "tx_vlan_stag_hw_insert": "off [fixed]", 
                "udp_fragmentation_offload": "on", 
                "vlan_challenged": "on [fixed]"
            }, 
            "hw_timestamp_filters": [], 
            "ipv4": {
                "address": "127.0.0.1", 
                "broadcast": "host", 
                "netmask": "255.0.0.0", 
                "network": "127.0.0.0"
            }, 
            "ipv6": [
                {
                    "address": "::1", 
                    "prefix": "128", 
                    "scope": "host"
                }
            ], 
            "mtu": 65536, 
            "promisc": false, 
            "timestamping": [
                "rx_software", 
                "software"
            ], 
            "type": "loopback"
        }, 
        "ansible_local": {}, 
        "ansible_lsb": {}, 
        "ansible_lvm": {
            "lvs": {
                "root": {
                    "size_g": "46.99", 
                    "vg": "centos"
                }, 
                "swap": {
                    "size_g": "2.00", 
                    "vg": "centos"
                }
            }, 
            "pvs": {
                "/dev/sda2": {
                    "free_g": "0.00", 
                    "size_g": "49.00", 
                    "vg": "centos"
                }
            }, 
            "vgs": {
                "centos": {
                    "free_g": "0.00", 
                    "num_lvs": "2", 
                    "num_pvs": "1", 
                    "size_g": "49.00"
                }
            }
        }, 
        "ansible_machine": "x86_64", 
        "ansible_machine_id": "817ad910bace4109bda4f5dc5c709092", 
        "ansible_memfree_mb": 299, 
        "ansible_memory_mb": {
            "nocache": {
                "free": 1455, 
                "used": 364
            }, 
            "real": {
                "free": 299, 
                "total": 1819, 
                "used": 1520
            }, 
            "swap": {
                "cached": 0, 
                "free": 2046, 
                "total": 2047, 
                "used": 1
            }
        }, 
        "ansible_memtotal_mb": 1819, 
        "ansible_mounts": [
            {
                "block_available": 205371, 
                "block_size": 4096, 
                "block_total": 259584, 
                "block_used": 54213, 
                "device": "/dev/sda1", 
                "fstype": "xfs", 
                "inode_available": 523945, 
                "inode_total": 524288, 
                "inode_used": 343, 
                "mount": "/boot", 
                "options": "rw,relatime,attr2,inode64,noquota", 
                "size_available": 841199616, 
                "size_total": 1063256064, 
                "uuid": "2c23c58e-cec4-4606-8210-ae4e5ec62133"
            }, 
            {
                "block_available": 11751659, 
                "block_size": 4096, 
                "block_total": 12312705, 
                "block_used": 561046, 
                "device": "/dev/mapper/centos-root", 
                "fstype": "xfs", 
                "inode_available": 24571916, 
                "inode_total": 24637440, 
                "inode_used": 65524, 
                "mount": "/", 
                "options": "rw,relatime,attr2,inode64,noquota", 
                "size_available": 48134795264, 
                "size_total": 50432839680, 
                "uuid": "ef36deb2-4eae-40f6-824c-fb1ac0d1e009"
            }
        ], 
        "ansible_nodename": "node2", 
        "ansible_os_family": "RedHat", 
        "ansible_pkg_mgr": "yum", 
        "ansible_proc_cmdline": {
            "BOOT_IMAGE": "/vmlinuz-3.10.0-957.27.2.el7.x86_64", 
            "LANG": "en_US.UTF-8", 
            "crashkernel": "auto", 
            "quiet": true, 
            "rd.lvm.lv": [
                "centos/root", 
                "centos/swap"
            ], 
            "rhgb": true, 
            "ro": true, 
            "root": "/dev/mapper/centos-root"
        }, 
        "ansible_processor": [
            "0", 
            "GenuineIntel", 
            "Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz", 
            "1", 
            "GenuineIntel", 
            "Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz", 
            "2", 
            "GenuineIntel", 
            "Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz", 
            "3", 
            "GenuineIntel", 
            "Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz"
        ], 
        "ansible_processor_cores": 2, 
        "ansible_processor_count": 2, 
        "ansible_processor_threads_per_core": 1, 
        "ansible_processor_vcpus": 4, 
        "ansible_product_name": "VMware Virtual Platform", 
        "ansible_product_serial": "VMware-56 4d 88 88 a7 86 38 42-f2 d9 58 02 e1 63 fd 11", 
        "ansible_product_uuid": "88884D56-86A7-4238-F2D9-5802E163FD11", 
        "ansible_product_version": "None", 
        "ansible_python": {
            "executable": "/usr/bin/python", 
            "has_sslcontext": true, 
            "type": "CPython", 
            "version": {
                "major": 2, 
                "micro": 5, 
                "minor": 7, 
                "releaselevel": "final", 
                "serial": 0
            }, 
            "version_info": [
                2, 
                7, 
                5, 
                "final", 
                0
            ]
        }, 
        "ansible_python_version": "2.7.5", 
        "ansible_real_group_id": 0, 
        "ansible_real_user_id": 0, 
        "ansible_selinux": {
            "status": "disabled"
        }, 
        "ansible_selinux_python_present": true, 
        "ansible_service_mgr": "systemd", 
        "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMiIcHrss0DX+TBpGMOnQM8dO9LSZnI5QrANTegeCywBZBiYglYYLZWkZXRRlnEYAUTy9yPFx+tInTl2Bbo+RxA=", 
        "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIJf28PYx30H/aUjwMBCXHZQseLByG1UYXrUgftwMEsWa", 
        "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDI3Bc9IkpKs0gwrgB+Iu5Ao4sqBypnzGvkR23ryMk3mkqob1rdL9FEbUnFnUcMRY4WgbLr+TzUxIRUHLKuJX3uMbfuQrYz14EVTTck7PQIJ1X6AoHRK462cZxzvekHYxUxsKSuGg64UAEf7UoXM7Zghfm6Y0gF0CSmnLQzCD6PDmdhAfteV8JVTxTGKzKX02/fnrDrvqpcGeEo3vFCWEdrrlAKAu5j8llbq6BghRi7+6h+cEI3qAjJkpWB9fnVXDUUqKJh3WuR7lhcXaD7NLVHiw2JEtOhZlkQEQMFHCQNDo0+fhHCNVPKybt/Zt1X1VvpocqcUIBJIYMdRXdhwXLR", 
        "ansible_swapfree_mb": 2046, 
        "ansible_swaptotal_mb": 2047, 
        "ansible_system": "Linux", 
        "ansible_system_capabilities": [
            "cap_chown", 
            "cap_dac_override", 
            "cap_dac_read_search", 
            "cap_fowner", 
            "cap_fsetid", 
            "cap_kill", 
            "cap_setgid", 
            "cap_setuid", 
            "cap_setpcap", 
            "cap_linux_immutable", 
            "cap_net_bind_service", 
            "cap_net_broadcast", 
            "cap_net_admin", 
            "cap_net_raw", 
            "cap_ipc_lock", 
            "cap_ipc_owner", 
            "cap_sys_module", 
            "cap_sys_rawio", 
            "cap_sys_chroot", 
            "cap_sys_ptrace", 
            "cap_sys_pacct", 
            "cap_sys_admin", 
            "cap_sys_boot", 
            "cap_sys_nice", 
            "cap_sys_resource", 
            "cap_sys_time", 
            "cap_sys_tty_config", 
            "cap_mknod", 
            "cap_lease", 
            "cap_audit_write", 
            "cap_audit_control", 
            "cap_setfcap", 
            "cap_mac_override", 
            "cap_mac_admin", 
            "cap_syslog", 
            "35", 
            "36+ep"
        ], 
        "ansible_system_capabilities_enforced": "True", 
        "ansible_system_vendor": "VMware, Inc.", 
        "ansible_uptime_seconds": 270556, 
        "ansible_user_dir": "/root", 
        "ansible_user_gecos": "root", 
        "ansible_user_gid": 0, 
        "ansible_user_id": "root", 
        "ansible_user_shell": "/bin/bash", 
        "ansible_user_uid": 0, 
        "ansible_userspace_architecture": "x86_64", 
        "ansible_userspace_bits": "64", 
        "ansible_virtualization_role": "guest", 
        "ansible_virtualization_type": "VMware", 
        "discovered_interpreter_python": "/usr/bin/python", 
        "gather_subset": [
            "all"
        ], 
        "module_setup": true
    }, 
    "changed": false
}

When a playbook runs, you will see this task in the output:

TASK [Gathering Facts] ************************************************************************************************************************
ok: [demo5.example.com]

Gathering these variables adds noticeable time to every run; fact gathering can be disabled per play:

- hosts: demo5.example.com
  gather_facts: false
  vars_files:
    - users.yml
  tasks:
    - debug:
        var: users.natash.0.hobby.0
    - debug:
        msg: "{{ users.tom.0.hobby }}"
    - debug:
        msg: "{{ users['tom'][0]['hobby'] }}"

Everything gathered by setup is available as a variable for that host.

Facts are referenced just like any other variable:

- hosts: demo5.example.com
  #gather_facts: false
  vars_files:
    - users.yml
  tasks:
    - debug:
        msg: "{{ ansible_os_family }}"

Run the playbook:

PLAY [demo5.example.com] **********************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [demo5.example.com]
TASK [debug] **********************************************************************************************************************************
ok: [demo5.example.com] => {
    "msg": "RedHat"
}
PLAY RECAP ************************************************************************************************************************************
demo5.example.com          : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
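
Facts are particularly useful in conditionals. A minimal sketch (hypothetical task, reusing the demo5.example.com host from above) that installs a package only when the gathered facts report a RedHat-family system:

- hosts: demo5.example.com
  tasks:
    # ansible_os_family comes from the gathered facts, so gather_facts must stay enabled
    - name: install httpd on RedHat-family hosts only
      yum:
        name: httpd
        state: present
      when: ansible_os_family == "RedHat"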

Setting facts manually

Besides the predefined facts, Ansible also lets you define custom facts for a host, known as local facts. Local facts live on the managed node, by default in /etc/ansible/facts.d; files in INI or JSON format are picked up automatically. Facts loaded this way appear under the special variable ansible_local.

Here is a simple example. On the control node, create an INI-format file named custom.fact with the following content:

[root@node1 ansible]# vim custom.fact

[general]
package = httpd
service = httpd
state = started

[root@node1 ansible]# vim push_facts.yml

- hosts: demo2.example.com
  tasks:
    - file:
        path: /etc/ansible/facts.d
        state: directory
    - copy:
        src: ./custom.fact
        dest: /etc/ansible/facts.d

[root@node1 ansible]# ansible-playbook push_facts.yml

PLAY [demo2.example.com] **********************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [demo2.example.com]

TASK [file] ***********************************************************************************************************************************
changed: [demo2.example.com]

TASK [copy] ***********************************************************************************************************************************
changed: [demo2.example.com]

PLAY RECAP ************************************************************************************************************************************
demo2.example.com          : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[root@node1 ansible]# ansible demo2.example.com -m setup -a "filter=ansible_local"

demo2.example.com | SUCCESS => {
    "ansible_facts": {
        "ansible_local": {
            "custom": {
                "general": {
                    "package": "httpd", 
                    "service": "httpd", 
                    "state": "started"
                }
            }
        }, 
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": false
}

Reference the new fact in a playbook:
[root@node1 ansible]# vim use_users.yml

- hosts: demo2.example.com
  #gather_facts: false
  vars_files:
    - users.yml
  tasks:
    - debug:
        msg: "{{ ansible_os_family }}"
    - debug:
        msg: "{{ ansible_local.custom.general.package }}"

[root@node1 ansible]# ansible-playbook use_users.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "RedHat"
}

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "httpd"
}
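
The same local fact could equally be written as JSON; facts.d accepts both formats. A minimal sketch of custom.fact rewritten as JSON (it shows up under ansible_local.custom exactly as before):

{
    "general": {
        "package": "httpd",
        "service": "httpd",
        "state": "started"
    }
}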

Defining new variables with the set_fact module

The set_fact module defines custom facts that can then be used in templates or as variables elsewhere in the playbook. For example, to use something like the percentage of memory consumed by a process, you would first compute the value with set_fact and then reference the result in the playbook.

Here is a basic set_fact example:

[root@node1 ansible]# cat set_fact_ex.yml

- name: set_fact example
  hosts: demo2.example.com
  tasks:
    - name: set facts
      set_fact: aaa=bbb
    - debug: 
        msg: "{{ aaa }}"

[root@node1 ansible]# ansible-playbook set_fact_ex.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "bbb"
}

[root@node1 ansible]# vim set_fact_ex.yml

- name: set_fact example
  hosts: demo2.example.com
  tasks:
    - name: set facts
      set_fact: aaa=bbb
    - debug: 
        msg: "{{ ansible_memtotal_mb }}"

[root@node1 ansible]# ansible-playbook set_fact_ex.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": 1819
}

A computed value can also be set:

[root@node1 ansible]# cat set_fact_ex.yml

- name: set_fact example
  hosts: demo2.example.com
  tasks:
    - name: set facts
      set_fact: half_memetotal={{ ansible_memtotal_mb/2 |int }}
    - debug: 
        msg: "{{ half_memetotal }}"

Run it:

[root@node1 ansible]# ansible-playbook set_fact_ex.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "909.5"
}
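
Note that the result is 909.5 rather than an integer: in Jinja2 a filter binds more tightly than the arithmetic operators, so ansible_memtotal_mb/2 |int applies int to the literal 2, not to the quotient. A minimal corrected sketch with parentheses, written in plain YAML form:

- name: set_fact example
  hosts: demo2.example.com
  tasks:
    - name: set facts
      set_fact:
        # the parentheses make the int filter apply to the whole quotient
        half_memtotal: "{{ (ansible_memtotal_mb / 2) | int }}"
    - debug:
        msg: "{{ half_memtotal }}"    # prints 909 on this host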

Gathering facts manually

Normally, when a play starts, Ansible first connects to the managed node over SSH to gather facts. If SSH on the managed node is not fully up yet, the whole play fails. In that case you can explicitly disable fact gathering, use wait_for in a task to wait until the SSH port on the managed node is listening, and then gather facts manually with the setup module:

- name: Deploy apps
  hosts: webservers
  gather_facts: False
  tasks:
    - name: wait for ssh to be running
      local_action: wait_for port=22 host="{{ inventory_hostname }}" search_regex=OpenSSH
    - name: gather facts
      setup:
......

Enabling fact caching

If facts need to be available without re-gathering them in every play, fact caching can be enabled. Fact caching currently supports three storage backends: JSON files, memcached, and Redis.

JSON file fact cache backend

With the JSON file backend, Ansible writes the gathered facts to files on the control node.

[root@node1 ansible]# vim ansible.cfg

[defaults]
gathering = smart
# cache lifetime, in seconds
fact_caching_timeout = 86400    
fact_caching = jsonfile
# directory for the JSON fact files; created automatically if it does not exist
fact_caching_connection = /tmp/ansible_fact_cache

Option notes:

  • gathering: controls fact gathering; three values are possible
  • smart: gather facts by default, but skip gathering when facts are already available, i.e. use the fact cache
  • implicit: gather facts by default; to disable, declare gather_facts: false explicitly
  • explicit: do not gather by default; to enable, declare gather_facts: true explicitly
  • fact_caching_timeout: cache lifetime, in seconds
  • fact_caching: cache backend; jsonfile, redis and memcached are supported
  • fact_caching_connection: backend connection setting; for jsonfile it is the cache directory path

[root@node1 ansible]# mkdir /tmp/ansible_fact_cache
[root@node1 ansible]# ansible-playbook set_fact_ex.yml
[root@node1 ansible]# cat /tmp/ansible_fact_cache/demo2.example.com
This file holds the cached facts for that host.

Redis fact cache backend

To use Redis as the fact cache backend, a Redis server must be installed and running on the control node, and the Python Redis client library must be installed there as well.

ansible.cfg looks like this:

[defaults]
gathering = smart
fact_caching_timeout = 86400 
fact_caching = redis
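
The Redis backend also expects a connection setting; a minimal sketch of the extra line, assuming Redis listening locally on its default port (the host:port:db format is an assumption, check the cache plugin documentation for your Ansible version):

# appended to the [defaults] section above
fact_caching_connection = localhost:6379:0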

Memcached fact cache backend

To use memcached as the fact cache backend, a memcached server must be installed and running on the control node, along with the Python memcached client library.

ansible.cfg looks like this:

[defaults]
gathering = smart
fact_caching_timeout = 86400 
fact_caching = memcached
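
As with Redis, a connection setting is normally added as well; a sketch assuming memcached listening locally on its default port:

# appended to the [defaults] section above
fact_caching_connection = 127.0.0.1:11211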

Magic variables

Ansible provides a number of built-in variables for special purposes; these are called magic variables. Some of the most commonly used ones are listed below.

hostvars

hostvars gives access to the variables of any host known to Ansible. Suppose a web server's configuration file needs the IP address of a db server; assume that db server's hostname is db.example.com and that its address is bound to interface ens33. On the web server we can then reference the db server's address. The first play below simply reads the current host's own address; the second uses hostvars to read another host's:

[root@node1 ansible]# vim var_test.yml

- hosts: demo2.example.com
  tasks:
    - debug:
        msg: "{{ ansible_ens33.ipv4.address }}"

[root@node1 ansible]# ansible-playbook var_test.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "192.168.132.132"     #这里获取的demo2.example.com
}

To read demo3.example.com's address instead, use hostvars:

- hosts: demo2.example.com
  tasks:
    - debug:
        msg: "{{ hostvars['demo3.example.com'].ansible_ens33.ipv4.address }}"

[root@node1 ansible]# ansible-playbook var_test.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "192.168.132.133"
}

inventory_hostname

inventory_hostname is the name of the current host as Ansible knows it from the inventory. If an alias is defined for the host in the inventory, inventory_hostname is that alias. For example, given this inventory line:

demo6.example.com  ansible_ssh_host=192.168.132.132

inventory_hostname is demo6.example.com.

Combining the hostvars and inventory_hostname variables (hostvars[inventory_hostname]) exposes every variable associated with the current host; a short sketch of that follows the example below, which contrasts inventory_hostname with the ansible_fqdn fact:

[root@node1 ansible]# vim var_test.yml

- hosts: demo6.example.com
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}"
    - debug:
        msg: "{{ ansible_fqdn }}"

[root@node1 ansible]# ansible-playbook var_test.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo6.example.com] => {
    "msg": "demo6.example.com"   #别名
 }
TASK [debug] **********************************************************************************************************************************
ok: [demo6.example.com] => {
    "msg": "node2"    #获取了本身的hostname
}
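
A minimal sketch of the hostvars[inventory_hostname] combination mentioned above, dumping every variable Ansible knows about the current host (the output is long):

- hosts: demo6.example.com
  tasks:
    # hostvars[inventory_hostname] is the complete variable dictionary of the current host
    - debug:
        var: hostvars[inventory_hostname]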

group_names

group_names lists the host groups that the host currently executing the task belongs to. Suppose, for example, that some of the hosts are MySQL servers; the inventory is configured as follows:

demo6.example.com ansible_ssh_host=192.168.132.132
demo5.example.com
demo3.example.com
[datacenter1]
demo1.example.com
demo2.example.com

[datacenter2]
demo3.example.com
demo4.example.com

[webserver]
demo1.example.com
demo2.example.com
demo3.example.com

[mysqlserver]
demo4.example.com
demo5.example.com

[datacenters:children]
datacenter1
datacenter2

[root@node1 ansible]# vim group_name.yml

- hosts: all
  tasks:
    - debug:
        msg: "{{ group_names }}"

[root@node1 ansible]# ansible-playbook group_name.yml

ok: [demo6.example.com] => {
    "msg": [
        "ungrouped"
    ]
}
ok: [demo4.example.com] => {
    "msg": [
        "datacenter2", 
        "datacenters", 
        "mysqlserver"
    ]
}
ok: [demo5.example.com] => {
    "msg": [
        "mysqlserver"
    ]
}
ok: [demo1.example.com] => {
    "msg": [
        "datacenter1", 
        "datacenters", 
        "webserver"
    ]
}
ok: [demo2.example.com] => {
    "msg": [
        "datacenter1", 
        "datacenters", 
        "webserver"
    ]
}
ok: [demo3.example.com] => {
    "msg": [
        "datacenter2", 
        "datacenters", 
        "webserver"
    ]
}
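
group_names is mostly used in conditionals, to run a task only on hosts that belong to a particular group. A minimal sketch (hypothetical task, reusing the webserver group above):

- hosts: all
  tasks:
    # only members of the webserver group get this task
    - name: install nginx on web servers only
      package:
        name: nginx
        state: present
      when: "'webserver' in group_names"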

groups

groups holds every host group in the inventory and can be used to enumerate all hosts in a group.

[root@node1 ansible]# vim group_name.yml

- hosts: demo2.example.com
  tasks:
    - debug:
        msg: "{{ groups.webserver }}"

[root@node1 ansible]# ansible-playbook group_name.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": [
        "demo1.example.com", 
        "demo2.example.com", 
        "demo3.example.com"
    ]
}

A typical use case: rendering a configuration file on a host that has to list every managed node in a group.

[root@node1 ansible]# cat files/haproxy.cfg

{% for i in groups.webserver %}
    server {{ i }};
{% endfor %}

[root@node1 ansible]# vim group_name.yml

- hosts: demo2.example.com
  tasks:
    - package:
        name: haproxy
        state: installed
    - template:
        src: ./files/haproxy.cfg
        dest: /etc/haproxy.cfg

Run it and check the rendered file:

[root@node1 ansible]# ansible-playbook group_name.yml
[root@node1 ansible]# ansible demo2.example.com -m shell -a 'cat /etc/haproxy.cfg'

demo2.example.com | CHANGED | rc=0 >>
    server demo1.example.com;
    server demo2.example.com;
    server demo3.example.com;

Other variables

  • play_hosts: the hosts the current play will run against
  • inventory_dir: the directory containing the inventory
  • inventory_file: the path of the inventory file
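
A quick sketch printing these three variables (hypothetical playbook):

- hosts: demo2.example.com
  tasks:
    - debug:
        msg: "{{ play_hosts }}"        # hosts the current play targets
    - debug:
        msg: "{{ inventory_dir }}"     # directory the inventory lives in
    - debug:
        msg: "{{ inventory_file }}"    # path of the inventory file itself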

Variable precedence

1. extra vars (passed with -e on the command line) have the highest priority
2. connection variables defined in the inventory (ansible_ssh_user and the like)
3. vars and vars_files defined in the play
4. the remaining variables defined in the inventory
5. the system's fact variables
6. role default variables (roles/<rolename>/defaults/main.yml)
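
As an illustration, a variable passed with -e beats the same variable defined in the play's vars section. A minimal sketch, using a hypothetical prio_test.yml:

- hosts: demo2.example.com
  vars:
    aaa: from_play_vars
  tasks:
    - debug:
        msg: "{{ aaa }}"

Running ansible-playbook prio_test.yml prints from_play_vars, while ansible-playbook prio_test.yml -e "aaa=from_extra_vars" prints from_extra_vars, because extra vars sit at the top of the list.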

Generating variables with lookup

Overview

Normally, all configuration data is stored as Ansible variables, in any of the places where Ansible allows variables to be defined: the vars section, files loaded with vars_files, and the host_vars and group_vars directories.

Sometimes, though, we want to collect data from a text file or a .csv file, capture the output of a command, or even fetch a value from a key-value store such as redis or etcd, and use the result as an Ansible variable. That is what Ansible's lookup plugins are for: they read data from an external source, hand it to Ansible variables, and let us use it in playbooks and templates.

Ansible ships with lookups for a range of data sources, including file, password, pipe, env, template, csvfile, dnstxt, redis_kv, etcd and more.

file

The file lookup reads data from a text file so it can be assigned to an Ansible variable and referenced in tasks or Jinja2 templates. A classic use is reading an SSH public key from a file and installing it on remote hosts (a sketch of that closes this subsection); the example below simply reads the local ./hosts inventory file and prints it:

[root@node1 ansible]# vim lookup_files_ex.yml

- hosts: demo2.example.com
  tasks:
    - debug:
        msg: "{{ lookup('file','./hosts') }}"

[root@node1 ansible]# ansible-playbook lookup_files_ex.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "srv1.example.com\nsrv2.example.com\ns1.lab.example.com\ns2.lab.example.com\n\n[web]\njupiter.lab.example.com\nsaturn.example.com\n\n[db]\ndb1.example.com\ndb2.example.com\ndb3.example.com\n\n[lb]\nlb1.lab.example.com\nlb2.lab.example.com\n\n[boston]\ndb1.example.com\njupiter.lab.example.com\nlb2.lab.example.com\n\n[london]\ndb2.example.com\ndb3.example.com\nfile1.lab.example.com\nlb1.lab.example.com\n\n[dev]\nweb1.lab.example.com\ndb3.example.com\n\n[stage]\nfile2.example.com\ndb2.example.com\n\n[prod]\nlb2.lab.example.com\ndb1.example.com\njupiter.lab.example.com\n\n[function:children]\nweb\ndb\nlb\ncity\n\n[city:children]\nboston\nlondon\nenvironments\n\n[environments:children]\ndev\nstage\nprod\nnew\n\n[new]\n172.25.252.23\n172.25.252.44"
}

The looked-up value can also be stored in a variable with set_fact:

- hosts: demo2.example.com
  tasks:
    - set_fact: aaa={{ lookup('file','./hosts') }}
    - debug:
        msg: "{{ aaa }}"

pipe

The pipe lookup runs an external command and uses its standard output as an Ansible variable. The command runs on the control node, not on the managed host. A common use is calling date to build a timestamp string (a sketch of that follows the example); here the example simply runs ip addr:

[root@node1 ansible]# vim lookup_pipe_ex.yml

- hosts: demo2.example.com
  tasks:
    - debug:
        msg: "{{ lookup('pipe','ip addr') }}"

[root@node1 ansible]# ansible-playbook lookup_pipe_ex.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000\n    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n    inet 127.0.0.1/8 scope host lo\n       valid_lft forever preferred_lft forever\n    inet6 ::1/128 scope host \n       valid_lft forever preferred_lft forever\n2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000\n    link/ether 00:0c:29:91:dd:19 brd ff:ff:ff:ff:ff:ff\n    inet 192.168.132.131/24 brd 192.168.132.255 scope global noprefixroute ens33\n       valid_lft forever preferred_lft forever\n    inet6 fe80::bcf9:af19:a325:e2c7/64 scope link noprefixroute \n       valid_lft forever preferred_lft forever\n3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default \n    link/ether 02:42:00:5f:59:93 brd ff:ff:ff:ff:ff:ff\n    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\n       valid_lft forever preferred_lft forever\n    inet6 fe80::42:ff:fe5f:5993/64 scope link \n       valid_lft forever preferred_lft forever"
}
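
The date example mentioned above, as a minimal sketch that builds a timestamp string (useful, for instance, for naming backup files):

- hosts: demo2.example.com
  tasks:
    - set_fact:
        timestamp: "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
    - debug:
        msg: "{{ timestamp }}"    # e.g. 20200430105058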

env

The env lookup reads the value of an environment variable on the control node. The example below reads $JAVA_HOME on the control machine:

- name: get JAVA_HOME
  debug: msg="{{ lookup('env', 'JAVA_HOME')}}"

csvfile

The csvfile lookup reads an entry from a .csv file. Suppose we create the following CSV file, test.csv:

[root@node1 ansible]# vim test.csv

username,email,gender
lorin,lorin@test.com,female
john,john@example.com,female
sue,sue@exmaple.com,male

[root@node1 ansible]# vim lookup_csvf_ex.yml

- name: get sue's email
  hosts: demo2.example.com  
  tasks:
    - debug: 
        msg: "{{ lookup('csvfile','sue file=test.csv delimiter=, col=1')}}"

Four arguments are passed to the plugin: sue, file=test.csv, delimiter=, and col=1. They mean the following:

  • The first argument is a key that must appear in column 0 of its row; if the key appears more than once in the file, the first match wins
  • The second argument is the name of the CSV file
  • The third argument is the field delimiter used in the CSV file
  • The fourth argument is the column to return from the matched row

[root@node1 ansible]# ansible-playbook lookup_csvf_ex.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "sue@exmaple.com"
}

The same value can, in principle, be extracted with the pipe lookup and awk. Note that the second debug task below is not wrapped in {{ }}, so Ansible prints the expression literally instead of evaluating it, and the awk field references appear to have been lost when the article was published; a corrected sketch follows the output.
[root@node1 ansible]# cat lookup_csvf_ex.yml

- name: get sue's email
  hosts: demo2.example.com  
  tasks:
    - debug: 
        msg: "{{ lookup('csvfile','sue file=test.csv delimiter=, col=1')}}"
    - debug: 
        msg: lookup('pipe',"awk -F , ' ~/sue/ {print }' test.csv" )

Run it:

[root@node1 ansible]# ansible-playbook lookup_csvf_ex.yml

TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "sue@exmaple.com"
}
TASK [debug] **********************************************************************************************************************************
ok: [demo2.example.com] => {
    "msg": "lookup('pipe',\"awk -F , ' ~/sue/ {print }' test.csv\" )"
}
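
A corrected sketch of what the second task presumably intended, wrapping the lookup in {{ }} and restoring the (assumed) awk field references; on the test.csv above it would print sue@exmaple.com, the same value the csvfile lookup returned:

- debug:
    msg: "{{ lookup('pipe', \"awk -F, '$1 ~ /sue/ {print $2}' test.csv\") }}"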

redis_kv

The redis_kv lookup fetches the value of a key directly from a Redis store; the key must be a string, just like with the Redis GET command. Note that redis_kv needs the Python Redis client installed on the control node; on CentOS the package is python-redis.

The task below calls the redis lookup from a playbook to read the key weather from a local Redis instance:

- name: lookup value in redis
  debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,weather')}}"

If the URL part is omitted, the plugin connects to redis://localhost:6379 by default, so the call above can simply be written as:

{{ lookup('redis_kv', 'weather')}}

etcd

etcd is a distributed key-value store, commonly used for configuration data or service discovery. The etcd lookup fetches the value of a given key from etcd.

Write a key into etcd like this:

curl -L http://127.0.0.1:2379/v2/keys/weather -XPUT -d value=sunny

Then define a task that calls the etcd plugin:

- name: look up value in etcd
  debug: msg="{{ lookup('etcd','weather')}}"

By default the etcd lookup expects the etcd server at http://127.0.0.1:4001, but this can be changed by setting the ANSIBLE_ETCD_URL environment variable before running the playbook.

password

The password lookup generates a random password and writes it to the file named in its argument. The example below creates a MySQL user named bob with a randomly generated password, which is saved to bob-password.txt on the control node:

- name: create deploy mysql user
  mysql_user: name=bob password={{ lookup('password', 'bob-password.txt') }} priv=*.*:ALL state=present
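
The password lookup also accepts parameters such as length and chars appended to the file name; a minimal sketch (parameter values are illustrative):

- debug:
    msg: "{{ lookup('password', 'bob-password.txt length=12 chars=ascii_letters,digits') }}"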

dnstxt

The dnstxt lookup returns the TXT record of a given domain. It requires python-dns on the control node.

Usage:

- name: lookup TXT record
  debug: msg="{{ lookup('dnstxt', 'aliyun.com') }}"

If a host has more than one TXT record, the lookup concatenates them, and the concatenation order may differ between calls.




Article source

https://www.xamrdz.com/backend/3u81942354.html
