Why use /etc/fstab instead of Kodi’s built-in NFS client? Mounting via /etc/fstab is faster than Kodi’s own NFS client: it delivers better throughput and is more reliable (more so than SMB mounting, too). Many performance issues, especially with high-bitrate content, can be solved by using NFS shares mounted through /etc/fstab. Additionally, it’s quite easy to set up.

Preparation:
You will need to know the following information:
1. The IP address of the system where your media files are shared from (in this tutorial, I will be using 192.168.1.5).
2. The directory used by the NFS share on your NAS. Use the following command to find the correct export path for your NAS:

showmount -e IP_of_your_NAS

3. The mount point in OSMC (in this tutorial, I will be using /mnt/NFS_Share).
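For step 2, the output against the NAS above looks something like this (the export path shown is illustrative; yours will differ):

```text
$ showmount -e 192.168.1.5
Export list for 192.168.1.5:
/mnt/array1/share *
```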

Edit your /etc/fstab file:

sudo nano /etc/fstab

Go to the end of the file (use the down arrow key) and add this line:

192.168.1.5:/mnt/array1/share /mnt/NFS_Share    nfs     noauto,x-systemd.automount  0  0

Once done editing /etc/fstab, save the file and exit nano with CTRL+X, then Y for “yes”.

Now verify that there are no errors in your fstab file:

sudo mount -a

If the command returns with no errors, you will need to reload systemd:

sudo systemctl daemon-reload
sudo systemctl restart remote-fs.target

At this point, your shares should just work. To test, simply try to go to the share:

cd /mnt/NFS_Share 
ls
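If you later need to pin the NFS protocol version or tune performance, the same fstab line accepts standard NFS mount options; for example (the extra options here are illustrative, not required):

```text
192.168.1.5:/mnt/array1/share /mnt/NFS_Share    nfs     noauto,x-systemd.automount,vers=3,noatime  0  0
```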

Source: https://discourse.osmc.tv/t/configuring-fstab-based-nfs-share-mounts/69953

The BGP aggregate-address command can be used to summarise a set of networks into a single prefix. For this post, I just want to show the difference between aggregate-address and aggregate-address with summary-only.

We have the topology below. I’m going to summarise prefixes on R1.

R1 config

hostname R1
!
interface GigabitEthernet0/0
 ip address 10.10.10.1 255.255.255.252
!
router bgp 10
 bgp log-neighbor-changes
 network 192.168.1.0
 network 192.168.2.0
 network 192.168.3.0
 neighbor 10.10.10.2 remote-as 20
!
ip route 192.168.1.0 255.255.255.0 Null0
ip route 192.168.2.0 255.255.255.0 Null0
ip route 192.168.3.0 255.255.255.0 Null0
!

R2 config

hostname R2
!
interface GigabitEthernet0/0
 ip address 10.10.10.2 255.255.255.252
!
router bgp 20
 bgp log-neighbor-changes
 neighbor 10.10.10.1 remote-as 10
!

Case 1: without aggregate-address

R2#sh ip bgp
     Network          Next Hop            Metric LocPrf Weight Path
 *>  192.168.1.0      10.10.10.1               0             0 10 i
 *>  192.168.2.0      10.10.10.1               0             0 10 i
 *>  192.168.3.0      10.10.10.1               0             0 10 i

Case 2: with aggregate-address
R1 config

router bgp 10
 bgp log-neighbor-changes
 network 192.168.1.0
 network 192.168.2.0
 network 192.168.3.0
 aggregate-address 192.168.0.0 255.255.252.0
R2#sh ip bgp
     Network          Next Hop            Metric LocPrf Weight Path
 *>  192.168.0.0/22   10.10.10.1               0             0 10 i
 *>  192.168.1.0      10.10.10.1               0             0 10 i
 *>  192.168.2.0      10.10.10.1               0             0 10 i
 *>  192.168.3.0      10.10.10.1               0             0 10 i

Note that we still have the original /24 routes (longer prefixes) as well as the summarised /22 route.

Case 3: aggregate-address with summary only
R1 config

router bgp 10
 bgp log-neighbor-changes
 network 192.168.1.0
 network 192.168.2.0
 network 192.168.3.0
 aggregate-address 192.168.0.0 255.255.252.0 summary-only
R2#sh ip bgp
     Network          Next Hop            Metric LocPrf Weight Path
 *>  192.168.0.0/22   10.10.10.1               0             0 10 i

All the longer prefixes inside the aggregate address are suppressed.
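As a quick sanity check on the mask (a standalone Python sketch, not router output): 255.255.252.0 is a /22, which covers exactly 192.168.0.0 through 192.168.3.255 and therefore contains all three advertised /24s.

```python
import ipaddress

# The aggregate from the post: 192.168.0.0 255.255.252.0 == /22
aggregate = ipaddress.ip_network("192.168.0.0/22")

# The three /24s advertised by R1
prefixes = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in (1, 2, 3)]

# Every advertised /24 falls inside the aggregate
print(all(p.subnet_of(aggregate) for p in prefixes))  # True

# The aggregate spans 192.168.0.0 - 192.168.3.255
print(aggregate[0], aggregate[-1])  # 192.168.0.0 192.168.3.255
```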

So I just noticed that the previous method only supports NFS v2. ESXi requires NFS v3 at minimum. So here’s the guide for NFS v3 on the Buffalo Linkstation.

Step 1:
Gain SSH access and install Optware (check my previous post)

Step 2:
Check for unfs package

#ipkg update
#ipkg list | grep unfs
unfs3 - 0.9.22-1 - Version 3 NFS server (not recommended, use nfs-utils instead)

Step 3:

Install unfs
#ipkg install unfs3
#ipkg install portmap (optional)

Step 4:
Configure

#nano /opt/etc/exports
/mnt/array1/share (rw,no_root_squash)

(CTRL+X to save and quit)

Restart services

#/opt/etc/init.d/S55portmap restart
#/opt/etc/init.d/S56unfsd restart

You can try mounting this NFS share from ESXi or an Ubuntu server:

#apt-get install rpcbind nfs-common
#mkdir -p /mnt/mynfsshare
#mount IP-of-your-NFS-Server:/mnt/array1/share /mnt/mynfsshare/
#df -kh

When configuring NFS shares as network sources for Kodi’s music/video libraries, use the following format: nfs://1.2.3.4/path/to/folder, where “1.2.3.4” should be replaced with the IP of your NFS server and “/path/to/folder” with the path to the folder you want to share. A double slash between the server and the path is wrong; do not use nfs://1.2.3.4//path/to/folder.

Reference(s):
http://web.archive.org/web/20151207200629/http://forum.buffalo.nas-central.org/viewtopic.php?t=6531&start=15
https://serverfault.com/questions/554215/nfs-mount-with-nfs-3
https://help.ubuntu.com/community/SettingUpNFSHowTo#Installation

SMB sucks when compared to NFS. Here’s how to enable NFS on our Linkstation.

Step 1
The first step is to gain ssh root access to this Linkstation. Refer here.

Step 2
Install NFS
# ipkg update
# ipkg install nfs-server

Step 3
To configure your exports you need to edit the configuration file /opt/etc/exports. My example is this:

/mnt/array1/backups 10.0.0.10(rw,sync)
/mnt/array1/films 10.0.0.10(rw,sync)
/mnt/array1/tv 10.0.0.10(rw,sync)

Or just allow the whole subnet:
/mnt/array1/backups 10.0.0.0/24(rw,sync)

Once that file has been updated you’ll need to restart NFS:
# /opt/etc/init.d/S56nfsd stop
# /opt/etc/init.d/S56nfsd start

If you receive this error when restarting NFS daemon:
Cannot register service: RPC: Unable to receive; errno = Connection refused

Try checking the portmapper:
#rpcinfo -p
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused

Easy, just restart the portmapper daemon
#/opt/etc/init.d/S55portmap stop
#/opt/etc/init.d/S55portmap start

Then repeat the step above to restart NFS.

Reference(s):
https://github.com/skx/Buffalo-220-NAS
https://maazanjum.com/2014/02/17/starting-nfs-quotas-cannot-register-service-rpc-unable-to-receive-errno-connection-refused/

There are plenty of tutorials on how to install Xpenology in a virtual environment. I tried a few methods a year ago and gave up. Recently I found a newer how-to and succeeded after hitting a few bumps. You can refer to this post if you want to try it on your own and learn something new along the way. I have decided to improvise on it and prepare a ready-made OVA file, to make importing this VM easier (or for those just too lazy to follow the step-by-step guide :)).

First of all, please download the OVA files from here. After that, unzip it and start importing the OVA from your ESXi or VMware Workstation. This OVA file was exported from ESXi 6.7 and is running DSM 6.2.1 (the latest version as of this post).

Drag all the 3 files to the import window

Choose Thick for best performance (pre-allocate disk space)

Increase Disk 2 if needed

Click finish to complete import and power on the VM

Wait for 1 minute and locate your DSM VM. The VM will obtain an IP address from DHCP (you can check from your DHCP server), or just type find.synology.com in your web browser – it will scan your network and find the DSM.

When DSM is located, click Connect

Enter your DSM info here

Click “Skip this step”

Go to Storage Manager to create your first disk pool

Drag the available disk from the left to the right

Next is to create the volume

This is an optional but recommended step. Download open-vm-tools and install it using Package Center. This enables ESXi to have visibility into this VM (such as its IP address) and allows you to gracefully shut down the VM from vSphere.

VM info is displayed correctly after open-vm-tools is installed.


I have a few Raspberry Pi 2 (RPi2) units lying around doing nothing. For this project, I will be building a centralized log server on an RPi2 for my home usage.

1. I’m using DietPi for the OS for a simple reason: it is extremely lightweight and has a very low memory footprint. I will not be covering writing the image to the RPi2 SD card because that is a pretty common process that I assume everyone already knows. Get it from here: https://dietpi.com/

2. Once DietPi is installed, ssh to it and complete the initial setup and update.

3. I’m using remotesyslog for the log collector. If you need advanced features, you may explore Graylog2. Follow the remotesyslog installation guide from here: https://www.remotesyslog.com/legacy/

4. Configure your devices to send the logs to this remotesyslog.
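On a Linux device, for example, forwarding everything to the collector can be a one-line rsyslog drop-in (the collector IP 192.168.1.20 is a placeholder; use your RPi2’s address):

```text
# /etc/rsyslog.d/90-forward.conf on the sending device
# single @ = UDP (syslog default, port 514); use @@ for TCP
*.* @192.168.1.20:514
```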

5. There are 2 ways of viewing the logs: through the CLI and the Web UI.

6. Access remotesyslog by launching any web browser; you should see the screen below.

7. SSH to the remotesyslog host and run rsview to see the logs from the terminal.

1. Make sure Telnet is enabled in Administration > System tab > Telnet (or SSH, if you prefer SSH).
2. Telnet to your router using PuTTY and log in. Make sure the Telnet option is ticked in PuTTY (or SSH, if using SSH).

3. Type in the command “nvram show | grep asus_device_list”.
You will see something similar to the output below; AC68U is my WiFi SSID, and likewise the MAC and router IP are mine.
Make sure you copy yours, not mine. lol

Sample result:
asus_device_list=<3>TENDA>192.168.1.1>D8:65:63:D4:3D:40>0>AC68U>255.255.255.0>1

4. Copy the entire string above except “asus_device_list=” and also replace “TENDA” with “RT-AC68U”.

Command: nvram set asus_device_list="< paste the string starting from <3> until 255.255.255.0>1 >"

Sample:
nvram set asus_device_list="<3>RT-AC68U>192.168.1.1>D8:65:63:D4:3D:40>0>AC68U>255.255.255.0>1"

5. Type in command “nvram show | grep asus_device_list” again to check whether it has the latest changes you made.

6. Next, type in “nvram show | grep odmpid”
You will see it’s showing TENDA

7. Type in nvram set odmpid=RT-AC68U. (For this part, after commit & reboot, if you issue “nvram show | grep odmpid” again it will be empty, but it still works. Need another sifu to comment on this part.)

8. Type in “nvram show | grep odmpid” to check again.

9. Check your settings with this command: nvram show | grep RT-AC68U
computer_name=RT-AC68U

odmpid=RT-AC68U
asus_device_list=<3>RT-AC68U>192.168.1.1>D8:65:63:D4:3D:40>0>AC68U>255.255.255.0>1

10. Type in nvram commit to apply.

11. Type in “reboot” and router will reboot. 

12. Download the ASUS Router app to try it out.

source: https://forum.lowyat.net/index.php?showtopic=4504268&view=findpost&p=90503295

Most of the time in a small network, we will be using the Layer 3 device as a default gateway and a DHCP server. And most of the time also, we will be excluding the gateway’s ip address from the dhcp pool. Just to save 1 more configuration line and for the sake of knowledge, the IP address configured on the router interface is automatically excluded from the DHCP address pool :). You need to exclude addresses from the pool if the DHCP server should not allocate those IP addresses.
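As a minimal IOS sketch (addresses, names, and ranges here are illustrative): the router’s own interface address 192.168.1.1 is never handed out even without an exclusion, while a range reserved for static assignments must be excluded explicitly:

```text
interface GigabitEthernet0/0
 ip address 192.168.1.1 255.255.255.0
!
! reserve a range for static assignments
ip dhcp excluded-address 192.168.1.240 192.168.1.254
!
ip dhcp pool LAN
 network 192.168.1.0 255.255.255.0
 default-router 192.168.1.1
```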

Documentation

Objectives

We are going to achieve 3 things here.
1. Install the OpenVZ OS
2. Install Ruby 1.8
3. Install the OpenVZ Web Panel

Install the OpenVZ OS

1. Get the ISO from https://download.openvz.org/virtuozzo/releases/7.0/x86_64/iso/
2. Install it as usual

Install Ruby 1.8

# command curl -sSL https://rvm.io/mpapis.asc | gpg2 --import -
# \curl -sSL https://get.rvm.io | bash -s stable

Log out and back in (or restart your SSH session), then:

# rvm install 1.8.7

Install the OpenVZ Web Panel

1. SSH to OpenVZ
2. Download OpenVZ Web Panel from github then unzip it

# wget https://github.com/sibprogrammer/owp/archive/master.zip
# unzip master.zip

3. Install the script and ruby dependencies

# cd owp-master/installer/
# chmod +x ai.sh
# ./ai.sh

4. Access the Web Panel at http://ip:3000 and log in with admin/admin.

Objectives:
1. To build white box for running ESXi
2. Support up to 64GB DDR4
3. Total power consumption below 30 watt on idle
4. Expandable, with PCI expansion slots and multiple SATA ports
5. Cheap as possible

Part lists (as of October 2017)
Intel – Pentium G4560 3.5GHz Dual-Core Processor RM 320.00 (Lazada)
-a poor man’s Core i7 in price vs performance; when it was introduced it cannibalized i3 sales, and Intel realized this and slowed down production. Low TDP.

Asus – PRIME B250M-A Micro ATX LGA1151 Motherboard RM 415.00 (Lazada)
-4 DIMM slots and support up to 64GB DDR4. Alternatively, you may consider Gigabyte GA-B250M-D3H.

Avexir Core Series DDR4/2400Mhz/16GB/LED RAM RM 569.00 (Lazada)
Avexir Core Series DDR4/2400Mhz/16GB/LED RAM RM 519.00 (Lazada)
-simply because it is the cheapest. 2x16GB is cheaper than 4x8GB RAM. Furthermore, I have 2 more free DIMM slots with this configuration.

Corsair – VS 450W ATX Power Supply RM 148.00 (Lelong)
-better than stock PSU

Tecware Quad Mini Cube ATX Case RM 180.00 (Lazada)
-cheap and affordable; importantly, it fits my IKEA rack perfectly for space-saving purposes. The size and dimensions resemble the famous HP Microserver Gen8 (I bought one a year ago then sold it because it was underutilized; now feeling regret :P)

Western Digital – Caviar Blue 1TB 3.5″ 7200RPM Internal Hard Drive (Re-Use)

I bought mostly from Lazada due to stock availability and to abuse their 10% voucher (prices listed above are before the 10% discount). Prices for the CPU and RAM are higher due to scarcity and the exchange rate.

Power Consumption
Averaging 29 watts!!

ESXi running VMs