宜蘭縣教育支援平台

Reference:
Root Password Reset - Proxmox VE

Method 1:
1. At the GRUB boot menu, press e


2. Find the line starting with linux /boot/vmlinuz.. and append init=/bin/bash at the end,
    then press Ctrl+x (or F10) to boot the system
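The post is truncated here; the remaining steps typically look like the following sketch (standard procedure, not taken from this post):

```shell
# The root filesystem is mounted read-only under init=/bin/bash; remount it read-write
mount -o remount,rw /
# Set the new root password
passwd
# Flush pending writes, then hand control back to the normal init to finish booting
sync
exec /sbin/init
```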



1. Upload the generated .vmdk to the Proxmox server


2. Run the conversion
-f  source format
-O  output format
-p  show progress
# qemu-img convert -f vmdk Nginx-disk1.vmdk -O qcow2 Nginx.qcow2 -p
    (100.00/100%)

3. Compare the files after conversion
# ls -l Nginx*
-rw-r--r-- 1 root root  872952320 Mar 28 10:24 Nginx-disk1.vmdk
-rw-r--r-- 1 root root 2101805056 Mar 28 10:43 Nginx.qcow2
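The qcow2 here comes out larger than the source vmdk, likely because the exported vmdk was stream-optimized (compressed) while the default qcow2 output is not. If size matters, qemu-img can compress the target and report actual disk usage (a sketch reusing the filenames above):

```shell
# -c compresses the qcow2 target (slower conversion, smaller file)
# qemu-img convert -f vmdk Nginx-disk1.vmdk -O qcow2 -c -p Nginx.qcow2
# "disk size" in the info output shows the space actually used
# qemu-img info Nginx.qcow2
```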


Our school currently runs a Proxmox cluster for its virtual machines. One day the power went out without warning, and after it was restored one of the Proxmox servers failed to boot automatically. As a result, starting any virtual machine on the other Proxmox servers produced the error below.
# pct start 200
cluster not ready - no quorum?

# pvecm status
Quorum information
------------------
Date:             Sat Mar 18 14:04:32 2017
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          2/160
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.168.1.39 (local)


Among the causes of cluster failures, the most common is a network outage between nodes. When the number of reachable nodes drops below 2, the cluster locks itself and falls into the "no quorum" state. This is because a Proxmox VE cluster by default expects at least 2 nodes to be present; with the link to the other node down, the status shows "Quorum: 2 Activity blocked".
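Until the powered-off node rejoins, the surviving node can be made quorate again by lowering the expected vote count (use with care — this deliberately bypasses quorum protection):

```shell
# pvecm expected 1
# pct start 200
```

Once the failed server boots and rejoins the cluster, the expected vote count returns to normal on its own.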


There are several ways to check; the methods below target virtual machines hosted on VMware.
1. Using the lshw command
# yum install lshw
# lshw | grep -m 1 product
    product: VMware Virtual Platform

2. Using the lspci command
# yum install pciutils
# lspci | grep -m 1 System
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)

3. Using the dmidecode command
# yum install dmidecode
# dmidecode | grep -m 1 Product
        Product Name: VMware Virtual Platform
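On distributions running systemd, there is also a fourth option that needs no extra package:

```shell
# Prints the detected hypervisor, e.g. "vmware" on a VMware guest or "none" on bare metal
# systemd-detect-virt
```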


After more than a month of research and study, I have reorganized the school's servers: apart from the NAT / File Server, most of them have been migrated onto Proxmox, set up as a cluster so the nodes back each other up.
Next, if time allows, I will take a brief look at VMware vSphere Hypervisor (ESXi).

Although I am already quite used to the English interface, with some idle time on my hands I still went ahead and localized it myself!


1. Point the package update sources at the National Center for High-performance Computing (NCHC) mirror
$ sudo cp /etc/apt/sources.list /etc/apt/sources.list.$(date +%F)
$ sudo sed -i 's/ftp.debian.org/free.nchc.org.tw/g' /etc/apt/sources.list

Clear the local package cache
$ sudo apt-get clean
Refresh the package lists
$ sudo apt-get update

2. Upgrade the packages
$ sudo apt-get upgrade
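On a Proxmox VE host in particular, note that plain upgrade can hold back packages whose dependency sets changed; the Proxmox documentation recommends a full distribution upgrade instead:

```shell
$ sudo apt-get update
$ sudo apt-get dist-upgrade
```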


After installing OpenVPN in a Proxmox LXC container, completing the configuration, and starting the service, the following error appeared
# systemctl status openvpn@server.service
● openvpn@server.service - OpenVPN connection to server
   Loaded: loaded (/lib/systemd/system/openvpn@.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2017-02-20 19:17:06 CST; 6s ago
     Docs: man:openvpn(8)
           https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
           https://community.openvpn.net/openvpn/wiki/HOWTO
  Process: 1585 ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/%i.conf --writepid /run/openvpn/%i.pid (code=exited, st
 Main PID: 1586 (code=exited, status=1/FAILURE)

Feb 20 19:17:06 vpn systemd[1]: Starting OpenVPN connection to server...
Feb 20 19:17:06 vpn systemd[1]: openvpn@server.service: PID file /run/openvpn/server.pid not readable (yet?) after start: No such file or directory
Feb 20 19:17:06 vpn systemd[1]: Started OpenVPN connection to server.
Feb 20 19:17:06 vpn systemd[1]: openvpn@server.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 19:17:06 vpn systemd[1]: openvpn@server.service: Unit entered failed state.
Feb 20 19:17:06 vpn systemd[1]: openvpn@server.service: Failed with result 'exit-code'.
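A frequent cause of this failure inside LXC is that the container has no /dev/net/tun device, so OpenVPN cannot create its tunnel interface. One common workaround is to allow and bind the device in the container's config on the Proxmox host (<CTID> is a placeholder for the container ID):

```
# /etc/pve/lxc/<CTID>.conf (on the Proxmox host)
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
```

Restart the container afterwards, then start the service and re-check systemctl status openvpn@server.service.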


Starting fail2ban produces the following error
# systemctl status fail2ban
● fail2ban.service - Fail2Ban Service
   Loaded: loaded (/usr/lib/systemd/system/fail2ban.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2017-02-17 12:46:16 CST; 2min 55s ago
     Docs: man:fail2ban(1)
  Process: 972 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)

Feb 17 12:46:16 NPC11 systemd[1]: Failed to start Fail2Ban Service.
Feb 17 12:46:16 NPC11 systemd[1]: Unit fail2ban.service entered failed state.
Feb 17 12:46:16 NPC11 systemd[1]: fail2ban.service failed.
Feb 17 12:46:16 NPC11 systemd[1]: fail2ban.service holdoff time over, sche...t.
Feb 17 12:46:16 NPC11 systemd[1]: start request repeated too quickly for f...ce
Feb 17 12:46:16 NPC11 systemd[1]: Failed to start Fail2Ban Service.
Feb 17 12:46:16 NPC11 systemd[1]: Unit fail2ban.service entered failed state.
Feb 17 12:46:16 NPC11 systemd[1]: fail2ban.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

Check the log entries in /var/log/messages
# grep -i fail2ban /var/log/messages
Feb 17 04:46:15 NPC11 systemd: Starting Fail2Ban Service...

Feb 17 04:46:15 NPC11 fail2ban-client: ERROR  There is no directory /var/run/fail2ban to contain the socket file /var/run/fail2ban/fail2ban.sock.
Feb 17 04:46:15 NPC11 systemd: fail2ban.service: control process exited, code=exited status=255
Feb 17 04:46:15 NPC11 systemd: Failed to start Fail2Ban Service.
Feb 17 04:46:15 NPC11 systemd: Unit fail2ban.service entered failed state.
Feb 17 04:46:15 NPC11 systemd: fail2ban.service failed.

It appears that the /var/run/fail2ban directory that should hold the fail2ban.sock socket file does not exist.
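On systemd-based systems /var/run is a symlink to the tmpfs /run, so the directory is wiped and must be recreated on every boot. A sketch of the usual fix, assuming the stock fail2ban package layout:

```shell
# Recreate the directory for the current boot
# mkdir -p /var/run/fail2ban
# Have systemd-tmpfiles recreate it automatically on future boots
# echo 'd /var/run/fail2ban 0755 root root -' > /etc/tmpfiles.d/fail2ban.conf
# systemctl start fail2ban
```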


Scan the status of the current physical volumes
# pvscan
  PV /dev/sda3   VG pve   lvm2 [118.99 GiB / 14.61 GiB free]
  Total: 1 [118.99 GiB] / in use: 1 [118.99 GiB] / in no VG: 0 [0   ]

Scan the status of the current volume groups
# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "pve" using metadata type lvm2

Display the status of the VG on this system
# vgdisplay pve
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  80
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               118.99 GiB
  PE Size               4.00 MiB
  Total PE              30461
  Alloc PE / Size       26720 / 104.38 GiB
  Free  PE / Size       3741 / 14.61 GiB
  VG UUID               M7GUTE-om2m-DMcv-1D0G-o3FQ-Ta3I-HCsZa3
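The 14.61 GiB of free extents in the VG could be used to grow one of its logical volumes. A sketch (the LV name pve/data and the ext4 filesystem are assumptions, not from the post):

```shell
# Allocate 10 GiB of the VG's free extents to the (hypothetical) pve/data LV
# lvextend -L +10G /dev/pve/data
# Grow the ext4 filesystem to fill the enlarged LV
# resize2fs /dev/pve/data
```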
