I have a DB server (let’s call it DB) on another cloud provider and a VPN server running WireGuard on Amazon AWS (let’s call it GW), an EC2 instance. I also have a web server as an EC2 instance (let’s call it WEB).
I am a complete noob to AWS services. My networking setup contains the following (sketched below in AWS CLI terms):
- A VPC containing two subnets, one public (let’s call it PUB) and one private (let’s call it PVT).
- An internet gateway on the PUB subnet
- An Elastic IP attached to one of GW’s network interfaces
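In case it helps, this layout can be dumped from the AWS CLI roughly like this (the VPC ID below is a placeholder):

```sh
# Sketch: list the subnets and route tables of the VPC described above
# (vpc-0abc1234def567890 is a placeholder ID)
aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=vpc-0abc1234def567890 \
    --query 'Subnets[].{Id:SubnetId,Cidr:CidrBlock}'

aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=vpc-0abc1234def567890 \
    --query 'RouteTables[].{Id:RouteTableId,Routes:Routes,Assoc:Associations}'
```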
The GW instance has two network interfaces:
- one on the PUB subnet (10.25.0.2/24), with the EIP associated
- one on the PVT subnet (10.25.240.2/24)
The WEB instance has one network interface (10.25.240.50/24).
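From inside the instances, that layout can be double-checked with something like the following (interface names are assumptions, they depend on the AMI):

```sh
# On GW: expect one interface with 10.25.0.2/24 (PUB) and one with 10.25.240.2/24 (PVT)
ip -br addr show

# On WEB: expect a single interface with 10.25.240.50/24
ip -br addr show
```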
Both have private IPv4 addresses, only GW has a public IPv4, and both have IPv6, but I’m focusing on getting IPv4 working first, so let’s ignore the IPv6 setup.
There is a WireGuard tunnel established between DB and GW with the following setup (sketched below):
- GW: wg0, 192.168.40.1/24
- DB: wg0, 192.168.40.2/24
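Roughly, the configs amount to the following (keys, port and endpoint are placeholders; the AllowedIPs on DB is what produces the 10.25.240.0/24 route shown further down):

```sh
# Sketch of the tunnel configuration on each end (placeholders for keys/port/endpoint)
# On GW:
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address = 192.168.40.1/24
ListenPort = 51820
PrivateKey = <GW_PRIVATE_KEY>

[Peer]
# DB
PublicKey = <DB_PUBLIC_KEY>
AllowedIPs = 192.168.40.2/32
EOF

# On DB:
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address = 192.168.40.2/24
PrivateKey = <DB_PRIVATE_KEY>

[Peer]
# GW, reachable on its Elastic IP
PublicKey = <GW_PUBLIC_KEY>
Endpoint = <GW_ELASTIC_IP>:51820
AllowedIPs = 192.168.40.0/24, 10.25.240.0/24
PersistentKeepalive = 25
EOF

# Bring the tunnel up on both ends
sudo wg-quick up wg0
```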
Both DB and GW can ping each other through the tunnel, and both GW and WEB can ping each other through their private subnet interfaces. I made an “allow everything” Security Group for both instances on the interfaces they use to talk to each other, because I suspected it could be the problem.
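The “allow everything” rule is essentially the equivalent of this AWS CLI call (the group ID is a placeholder):

```sh
# Sketch: allow all protocols and ports from anywhere on the relevant security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
```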
All instances run Linux, and GW has the net.ipv4.ip_forward sysctl option set to 1.
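That is, roughly:

```sh
# Verify and persist IPv4 forwarding on GW
sysctl net.ipv4.ip_forward        # should print: net.ipv4.ip_forward = 1
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-forward.conf
sudo sysctl --system
```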
I tried disabling my firewall (firewalld), I tried creating policies for inter-zone traffic forwarding, I tried everything, but packets from DB simply won’t arrive at WEB (they do leave GW, though), and packets from WEB won’t arrive at GW.
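The forward policies I tried looked roughly like this (the zone names are placeholders for whatever zones wg0 and the PVT-side interface are bound to):

```sh
# Sketch of a firewalld forward policy from the WireGuard zone to the private-subnet zone
sudo firewall-cmd --permanent --new-policy wg-to-pvt
sudo firewall-cmd --permanent --policy wg-to-pvt --add-ingress-zone wgzone
sudo firewall-cmd --permanent --policy wg-to-pvt --add-egress-zone internal
sudo firewall-cmd --permanent --policy wg-to-pvt --set-target ACCEPT
sudo firewall-cmd --reload

# ...and, for testing, stopping firewalld altogether
sudo systemctl stop firewalld
```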
I tested with ICMP packets while running tcpdump: packets destined to WEB from DB arrive on the tunnel interface and are sent out onto the private subnet (confirmed by dumping the private subnet’s interface), but tcpdump on the WEB instance doesn’t show anything arriving. Also, packets from WEB destined to DB are captured on the WEB network interface, but don’t appear on the GW interface at all.
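The captures were along these lines (interface names are assumptions; eth1 stands for GW's PVT-subnet interface):

```sh
# On GW: watch ICMP on the tunnel and on the private-subnet interface
sudo tcpdump -ni wg0 icmp
sudo tcpdump -ni eth1 icmp and host 10.25.240.50

# On WEB: nothing ever shows up here
sudo tcpdump -ni eth0 icmp and host 192.168.40.2

# Test traffic
ping -c 3 10.25.240.50    # from DB, through the tunnel
ping -c 3 192.168.40.2    # from WEB, via the manual route through GW
```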
DB routing table:
default via 10.1.1.1 dev eth0 proto dhcp src 10.1.1.149 metric 100
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.149 metric 100
10.25.240.0/24 dev wg0 scope link
192.168.40.0/24 dev wg0 proto kernel scope link src 192.168.40.2
(the route to 10.25.240.0/24 was produced by WireGuard’s AllowedIPs)
WEB routing table:
default via 10.25.240.1 dev eth0 proto dhcp src 10.25.240.50 metric 100
10.25.240.0/24 dev eth0 proto kernel scope link src 10.25.240.50 metric 100
192.168.40.0/24 via 10.25.240.2 dev eth0
(the route to 192.168.40.0/24 was manually added via NetworkManager config)
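That route was added with something like this (the connection name is a placeholder):

```sh
# Static route on WEB pointing the WireGuard subnet at GW's PVT-side address
nmcli connection modify "System eth0" +ipv4.routes "192.168.40.0/24 10.25.240.2"
nmcli connection up "System eth0"
ip route show 192.168.40.0/24    # expect: 192.168.40.0/24 via 10.25.240.2 dev eth0
```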
That said, I have a twofold question:
- In general, how would I approach this kind of scenario to diagnose the issue when working with AWS?
- Specifically, what could be the possible cause, and what are the possible solutions, for this issue?