PSP Infrastructure Setup and System Deployment Case Study
22 August 2024
Table of contents:
- Deploying the DNS server
- Deploying a local repository for component updates
- Setting up a Docker repository and application control server
- Deploying the database management system
- Setting up application servers and traffic balancing
- Configuring SMTP for email sending
- Configuring antivirus protection
- Setting up centralized log storage
- Configuring system monitoring
- Configuring HIDS/HIPS
- Deploying and configuring the document flow system
- Load testing
- Preparing the infrastructure for certification
- Complete list of tasks completed during the PSP deployment
- Conclusion
- Frequently asked questions
In our previous article, we detailed the progress made during our initial Discovery phase and emphasized some valuable insights we’ve acquired. During that stage, our team diligently gathered and analyzed data, laying the groundwork for the entire project. We also worked to ensure alignment among all stakeholders regarding goals and expectations.
Moving forward, we’re excited to delve into the core stage of Deployment and PSP certification, describing the key steps in detail.
For deploying PSP infrastructure, we first needed to prepare an application-agnostic infrastructure. This involves a set of servers, each performing a specific function in accordance with the requirements of the information security standard PCI DSS.
The industry standard is to build PSP infrastructure using ready-made services from a cloud provider. However, during the Discovery phase, we found that the local regulations in our client's jurisdiction mandate that the payment application must be fully deployed within the country's territory. This ruled out options like AWS or DigitalOcean.
Given these conditions, our choice of cloud provider was narrowed down to two local options, both of which posed additional challenges compared to a standard setup. This led us to take an alternative approach: setting up each necessary service manually within the infrastructure.
Overall, this infrastructure is a PSP solution designed to grow with increasing demands. However, building a payment service provider infrastructure from pre-configured components presents a dual challenge. On one hand, the process seems straightforward because the necessary components can be defined in advance. On the other hand, it introduces complexity, since solutions must be tailored to the specific requirements and conditions of the project and account for local nuances.
In this article, we will explore several such tasks that, while not covering the entire project, vividly illustrate the technical challenges encountered during the PSP infrastructure deployment.
Deploying the DNS server
Managing a network of servers inevitably involves the task of identifying devices by their IP addresses, necessitating simple and memorable names to streamline the process. In smaller networks, this can be handled manually by mapping names to IP addresses in the hosts file on each server. However, as the number of servers grows, this method becomes increasingly labor-intensive. Knowing we needed a substantial network from the outset, deploying a DNS server became one of our top priorities.
Initially, we considered using the open-source FreeIPA suite, which offers DNS, NTP, and certificate management functions, among other features. However, we found this suite too cumbersome for our needs. As a result, we chose BIND to implement the DNS server functions, utilized OpenSSL for certificate issuance, and ensured time synchronization using the standard ntpd (Network Time Protocol daemon).
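To give a flavor of the day-to-day work with this setup, below is a minimal sketch of how internal A records can be sanity-checked against the BIND server after a zone change. It assumes the dnspython library (2.x) and uses a hypothetical nameserver address and hostnames, not the actual values from the project.

```python
# verify_dns.py - sanity-check internal A records against the BIND server.
# The nameserver address and hostnames below are placeholders.
import sys

import dns.exception
import dns.resolver  # pip install dnspython (2.x)

BIND_SERVER = "10.0.0.53"          # hypothetical internal BIND address
EXPECTED = {
    "db01.psp.internal": "10.0.1.10",
    "app01.psp.internal": "10.0.2.10",
    "app02.psp.internal": "10.0.2.11",
}

def main() -> int:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [BIND_SERVER]
    failures = 0
    for name, expected_ip in EXPECTED.items():
        try:
            answer = resolver.resolve(name, "A")
            got = {rr.to_text() for rr in answer}
        except dns.exception.DNSException as exc:
            print(f"{name}: lookup failed ({exc})")
            failures += 1
            continue
        if expected_ip in got:
            print(f"{name}: OK")
        else:
            print(f"{name}: MISMATCH (got {got})")
            failures += 1
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```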
Deploying a local repository for component updates
According to the security requirements of the PCI DSS industry standard, application access to external networks must be restricted. However, to ensure the security and functionality of our solutions, regular updates are essential. To address the need for updates, we deployed a local repository where updated packages from publishers are regularly uploaded. The system administrator installs these packages from the local repository onto the operational servers, ensuring a controlled and secure update process.
A detailed example of using a local repository will be discussed below in the section dedicated to configuring antivirus software.
Setting up a Docker repository and application control server
To avoid getting Docker images from public repositories on the internet, we needed to set up a self-hosted Docker registry. Our experience showed that Harbor is an excellent choice due to its ease of setup and flexibility.
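As a quick illustration, here is a sketch of checking a self-hosted registry such as Harbor over the standard Docker Registry HTTP API v2: pinging the API endpoint and listing the tags of one repository. The host, project/repository name, and credentials are placeholders.

```python
# registry_check.py - verify that a self-hosted registry (Harbor in our case)
# answers on the Docker Registry HTTP API v2 and list tags of one repository.
# Host, repository, and credentials are placeholders.
import requests

REGISTRY = "https://registry.psp.internal"   # hypothetical Harbor host
REPO = "payments/gateway"                    # hypothetical project/repository
AUTH = ("ci-robot", "secret")                # placeholder credentials

def main() -> None:
    # The /v2/ endpoint is the API version check defined by the registry spec.
    ping = requests.get(f"{REGISTRY}/v2/", auth=AUTH, timeout=10)
    ping.raise_for_status()
    print("Registry API v2 is reachable")

    tags = requests.get(f"{REGISTRY}/v2/{REPO}/tags/list", auth=AUTH, timeout=10)
    tags.raise_for_status()
    print(f"{REPO} tags: {tags.json().get('tags', [])}")

if __name__ == "__main__":
    main()
```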
Deploying the database management system
When deploying our Database Management System (DBMS), we initially chose a Master-Slave configuration to ensure fault tolerance and load distribution. On the Slave, we set up backups using Percona XtraBackup. To ensure data safety, we implemented automatic backups and a procedure that encrypts and archives backups older than seven days on a separate storage server.
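The rotation step can be illustrated with a simplified sketch: archive and encrypt XtraBackup directories older than seven days, then ship them to a separate storage host. The paths, passphrase file, and storage target below are placeholders, and the backups themselves are still taken with Percona XtraBackup.

```python
# rotate_backups.py - simplified sketch of the rotation step described above:
# archive and encrypt backup directories older than seven days, then copy
# them to a separate storage server. Paths and hosts are placeholders.
import subprocess
import tarfile
import time
from pathlib import Path

BACKUP_ROOT = Path("/var/backups/mysql")            # hypothetical XtraBackup output dir
PASSPHRASE_FILE = "/etc/backup/passphrase"          # placeholder symmetric key file
STORAGE_TARGET = "backup-storage:/archive/mysql/"   # placeholder storage server
MAX_AGE = 7 * 24 * 3600                             # seven days, in seconds

def rotate() -> None:
    now = time.time()
    for backup_dir in BACKUP_ROOT.iterdir():
        if not backup_dir.is_dir() or now - backup_dir.stat().st_mtime < MAX_AGE:
            continue
        archive = backup_dir.parent / (backup_dir.name + ".tar")
        with tarfile.open(archive, "w") as tar:
            tar.add(backup_dir, arcname=backup_dir.name)
        # Symmetric encryption with GnuPG; produces <archive>.gpg.
        subprocess.run(
            ["gpg", "--batch", "--yes", "--pinentry-mode", "loopback",
             "--passphrase-file", PASSPHRASE_FILE, "--symmetric", str(archive)],
            check=True,
        )
        # Copy the encrypted archive to the storage server and drop the plaintext tar.
        subprocess.run(["scp", f"{archive}.gpg", STORAGE_TARGET], check=True)
        archive.unlink()

if __name__ == "__main__":
    rotate()
```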
With a Master-Slave setup, it is tempting to direct all read requests to the Slave to offload the Master. However, our experience shows that SQL proxies do not provide sufficient stability in this configuration, so all requests are still handled by the Master.
Setting up application servers and traffic balancing
On the primary (active) and secondary (standby) application servers, Docker was installed in rootless mode. The applications are configured to operate in an active-standby mode. To switch traffic between them, we use the NGINX web server with reverse proxy capabilities. This setup allows for both manual and automatic failover, ensuring seamless traffic switching and high availability.
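NGINX itself does the reverse proxying; the sketch below only illustrates one possible external health probe that switches traffic to the standby by re-pointing the upstream configuration and reloading NGINX. The health URL, file paths, and the symlink-based switch are assumptions for illustration, not our exact failover mechanism.

```python
# failover_probe.py - illustrative active/standby switch: if the active
# application server stops answering its health endpoint, point the NGINX
# upstream config at the standby and reload NGINX. URLs, paths, and the
# symlink-based switch are assumptions made for this example.
import subprocess

import requests

ACTIVE_HEALTH_URL = "http://10.0.2.10:8080/health"   # hypothetical active app server
UPSTREAM_LINK = "/etc/nginx/conf.d/upstream.conf"     # symlink included by nginx.conf
STANDBY_CONF = "/etc/nginx/upstreams/standby.conf"    # pre-generated standby upstream

def active_is_healthy() -> bool:
    try:
        return requests.get(ACTIVE_HEALTH_URL, timeout=3).status_code == 200
    except requests.RequestException:
        return False

def switch_to_standby() -> None:
    # Re-point the symlink at the standby upstream definition and reload NGINX.
    subprocess.run(["ln", "-sfn", STANDBY_CONF, UPSTREAM_LINK], check=True)
    subprocess.run(["nginx", "-s", "reload"], check=True)

if __name__ == "__main__":
    if active_is_healthy():
        print("Active server healthy: no action taken")
    else:
        switch_to_standby()
        print("Active server unhealthy: traffic switched to standby")
```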
Configuring SMTP for email sending
In our payment provider infrastructure, we deployed a Postfix mail server to handle outgoing emails from our application, such as alerts for administrators or password change notifications for users. A mail user was created in Postfix, and the application connects to the server via SMTP over TLS using this user's credentials. The Postfix server is configured to relay emails through a Google Mail account, which ensures high delivery reliability and spares us an intricate mail service setup and spam filter issues.
To support this setup, an app password must be created in the Google Mail account, which is only possible with two-factor authentication (2FA) enabled. Google requires the account password to be changed every two months when 2FA is enabled, so the procedure of changing the Google Mail account password and generating a new app password has to be repeated every two months.
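For illustration, here is a minimal sketch of how an application can hand a notification to the internal Postfix relay over SMTP with STARTTLS, using the mail user's credentials. The relay host, addresses, and credentials are placeholders.

```python
# send_alert.py - sketch: submit a notification to the internal Postfix relay
# over SMTP with STARTTLS. Relay host and credentials are placeholders.
import smtplib
import ssl
from email.message import EmailMessage

SMTP_HOST = "mail.psp.internal"   # hypothetical Postfix relay
SMTP_PORT = 587                   # submission port with STARTTLS
SMTP_USER = "app-mailer"
SMTP_PASSWORD = "change-me"       # placeholder; keep real credentials in a secret store

def send_alert(to_addr: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "noreply@psp.example"
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)

    context = ssl.create_default_context()
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=15) as smtp:
        smtp.starttls(context=context)   # upgrade the connection to TLS
        smtp.login(SMTP_USER, SMTP_PASSWORD)
        smtp.send_message(msg)

if __name__ == "__main__":
    send_alert("admin@psp.example", "Test alert", "Postfix relay reachability check.")
```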
Configuring antivirus protection
For antivirus protection on our servers, we use the free software package ClamAV. The process of updating virus signature databases exemplifies why we initially deployed a local repository.
Updating the databases independently on each server would significantly increase network load. Moreover, the anti-abuse protection on ClamAV's update servers can interpret multiple requests from the same IP address (in our experience, as few as five) as a threat, leading to an IP block and loss of access to updates.
To ensure security and prevent IP blocking, we decided to use the local repository. Virus signature updates are downloaded every two hours, as recommended by the vendor, and stored in the repository. All servers then retrieve these updates from the local repository.
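A simplified sketch of the client side of this scheme is shown below: each server pulls the signature files from the internal repository over HTTP and swaps them into the ClamAV database directory. The repository URL and file names are placeholders; in practice freshclam can also simply be pointed at a private mirror.

```python
# pull_clamav_db.py - simplified sketch of the client side of the scheme above:
# fetch ClamAV signature files from the internal repository instead of the
# public mirrors. The repository URL and database path are placeholders.
import shutil
import urllib.request
from pathlib import Path

LOCAL_REPO = "http://repo.psp.internal/clamav"   # hypothetical internal repository
DB_DIR = Path("/var/lib/clamav")
SIGNATURE_FILES = ["main.cvd", "daily.cvd", "bytecode.cvd"]

def pull_signatures() -> None:
    for name in SIGNATURE_FILES:
        tmp_path = DB_DIR / f"{name}.tmp"
        with urllib.request.urlopen(f"{LOCAL_REPO}/{name}", timeout=60) as resp, \
                open(tmp_path, "wb") as out:
            shutil.copyfileobj(resp, out)
        # Swap in the new file so clamd never sees a half-written database;
        # clamd's periodic self-check picks up the refreshed files.
        tmp_path.replace(DB_DIR / name)

if __name__ == "__main__":
    pull_signatures()
```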
Setting up centralized log storage
We configured centralized log storage using Rsyslog, enabling us to manage logs collected from multiple servers without needing to log in to each one individually. The log server receives log entries from all client servers and consolidates them in one place for monitoring, archiving, and analysis if necessary.
Logs are saved in daily archive files, providing convenient access for analyzing the system’s entire operational period.
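System daemons are forwarded to the central server by the local rsyslog rules, but application code can log to the same place over the standard syslog protocol. Below is a minimal sketch using Python's standard library; the log server address is a placeholder.

```python
# central_logging.py - minimal sketch: emit application log records to the
# central Rsyslog server over UDP/514 using the standard syslog protocol.
# The log server address is a placeholder.
import logging
import logging.handlers

LOG_SERVER = ("logs.psp.internal", 514)   # hypothetical central Rsyslog host

logger = logging.getLogger("psp.app")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=LOG_SERVER)
handler.setFormatter(logging.Formatter("psp-app: %(levelname)s %(message)s"))
logger.addHandler(handler)

if __name__ == "__main__":
    logger.info("centralized logging smoke test")
```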
Configuring system monitoring
We set up system monitoring using Zabbix. This tool tracks the availability and status of resources and monitors various system health parameters, including free space, memory, CPU load, and more. Zabbix sends notifications when issues are detected and provides a web interface with a dashboard to monitor the system's overall health and analyze data.
Additionally, we configured an integrated monitoring system for tracking application performance. This system automatically sends email notifications if business or application-level issues arise, such as a decline in payment success rates or an increase in API failures.
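The sketch below shows the shape of such a business-level check: compute the payment success rate over a recent window and email an alert through the internal Postfix relay when it drops below a threshold. The database DSN, table layout, threshold, and addresses are assumptions for illustration only.

```python
# payment_health_check.py - simplified sketch of a business-level check:
# alert by email if the recent payment success rate drops below a threshold.
# DSN, table layout, threshold, and mail settings are assumptions.
import smtplib
from email.message import EmailMessage

import pymysql  # pip install pymysql

DSN = dict(host="db01.psp.internal", user="monitor",
           password="change-me", database="payments")
THRESHOLD = 0.95        # alert if the success rate falls below 95%
WINDOW_MINUTES = 15

def success_rate() -> float:
    query = """
        SELECT SUM(status = 'success') / COUNT(*)
        FROM transactions
        WHERE created_at > NOW() - INTERVAL %s MINUTE
    """
    conn = pymysql.connect(**DSN)
    try:
        with conn.cursor() as cur:
            cur.execute(query, (WINDOW_MINUTES,))
            value = cur.fetchone()[0]
    finally:
        conn.close()
    return float(value) if value is not None else 1.0

def send_alert(rate: float) -> None:
    msg = EmailMessage()
    msg["From"] = "monitoring@psp.example"
    msg["To"] = "support@psp.example"
    msg["Subject"] = f"Payment success rate dropped to {rate:.1%}"
    msg.set_content(f"Success rate over the last {WINDOW_MINUTES} min: {rate:.1%}")
    with smtplib.SMTP("mail.psp.internal", 587) as smtp:   # internal Postfix relay
        smtp.starttls()
        smtp.login("app-mailer", "change-me")
        smtp.send_message(msg)

if __name__ == "__main__":
    rate = success_rate()
    if rate < THRESHOLD:
        send_alert(rate)
```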
Configuring HIDS/HIPS
Host-based Intrusion Detection Systems (HIDS) and Host-based Intrusion Prevention Systems (HIPS) detect unwanted and suspicious activities on servers, such as file changes, failed login attempts, and other anomalies. We selected OSSEC for this task. Notifications about events are automatically sent to a private Telegram chat for the support team, ensuring prompt response to potential threats and anomalies in the infrastructure.
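One way to wire such alerts into Telegram is a small hook script that posts the alert text to the Bot API, as in the sketch below. The bot token and chat ID are placeholders, and the way OSSEC invokes the script (integration or active-response hook) is configured separately and not shown here.

```python
# ossec_to_telegram.py - sketch of a notification hook: forward an alert
# message to a private Telegram chat via the Bot API. Token and chat ID are
# placeholders; how the script is invoked is configured on the OSSEC side.
import sys

import requests

BOT_TOKEN = "123456:ABC-placeholder"   # hypothetical bot token
CHAT_ID = "-1001234567890"             # hypothetical private chat ID

def notify(text: str) -> None:
    resp = requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text[:4000]},  # Telegram caps message length
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Alert text is passed as arguments or on stdin, depending on the hook.
    notify(" ".join(sys.argv[1:]) or sys.stdin.read())
```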
Deploying and configuring the document flow system
We chose Redmine for deploying and configuring our document flow system. Despite some drawbacks and a somewhat dated interface, it provides the functionality we need for document management.
Load testing
After preparing the infrastructure, we conducted load testing with Gatling. This tool supports various types of load tests and helps evaluate application performance. Metrics obtained from the testing, such as transactions per second (TPS) and average response time, allowed us to assess the system's stability and efficiency under load.
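For context, the headline metrics are simple to derive: Gatling's reports provide them out of the box, but the sketch below shows the arithmetic from a flat CSV of request timestamps and durations. The file layout is an assumption used only for illustration.

```python
# load_metrics.py - minimal sketch: derive TPS and average response time from
# a flat CSV of (epoch_seconds, duration_ms) rows. The CSV layout is an
# assumption; Gatling's own reports already provide these figures.
import csv

def summarize(path: str) -> None:
    timestamps, durations = [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            timestamps.append(float(row[0]))
            durations.append(float(row[1]))
    elapsed = (max(timestamps) - min(timestamps)) or 1.0   # avoid division by zero
    tps = len(durations) / elapsed
    avg_ms = sum(durations) / len(durations)
    print(f"requests: {len(durations)}, TPS: {tps:.1f}, avg response: {avg_ms:.0f} ms")

if __name__ == "__main__":
    summarize("simulation.csv")
```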
Preparing the infrastructure for certification
When preparing the infrastructure for certification, we utilized the open-source security scanners OpenSCAP and Greenbone.
OpenSCAP performs internal analysis of the operating system and installed software, generating reports on detected vulnerabilities. It also offers automatic remediation for identified issues, such as security policy violations. A thorough scan was conducted with this tool before certification, and necessary changes were made based on the generated report.
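A sketch of such a scan run is shown below: the oscap CLI is driven against an SCAP Security Guide data stream and the HTML report is kept for the certification evidence. The data-stream path and profile ID depend on the distribution and are placeholders here.

```python
# run_oscap.py - sketch: run an OpenSCAP evaluation against an SCAP Security
# Guide data stream and keep the results and HTML report. The data-stream
# path and profile ID are distribution-dependent placeholders.
import subprocess
from datetime import date
from pathlib import Path

DATASTREAM = "/usr/share/xml/scap/ssg/content/ssg-ubuntu2204-ds.xml"  # placeholder
PROFILE = "xccdf_org.ssgproject.content_profile_pci-dss"              # placeholder
REPORT_DIR = Path("/var/log/oscap")

def run_scan() -> None:
    REPORT_DIR.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()
    subprocess.run(
        ["oscap", "xccdf", "eval",
         "--profile", PROFILE,
         "--results", str(REPORT_DIR / f"results-{stamp}.xml"),
         "--report", str(REPORT_DIR / f"report-{stamp}.html"),
         DATASTREAM],
        check=False,  # oscap exits non-zero when rules fail; the report is still produced
    )

if __name__ == "__main__":
    run_scan()
```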
Greenbone operates as a separate security scanner within the infrastructure, remaining inactive except when triggered for scanning. It scans servers over the internal network, examining ports, installed software, certificates, and other parameters. The scan results are provided in a report that helps identify potential vulnerabilities and security issues.
Complete list of tasks completed during the PSP deployment
During the second phase of the project, we accomplished a range of tasks, including those detailed above. For convenience and brevity, this article highlights only the most significant ones. Below is the list of tasks included in the infrastructure preparation package, though it is not exhaustive:
- Ensuring administrative access only from the console server.
- Deploying DNS server.
- Deploying a local repository for component updates.
- Configuring Docker repository.
- Setting up the application control server.
- Deploying DBMS.
- Configuring application server 1.
- Configuring application server 2.
- Setting up traffic balancing between application servers.
- Deploying the application.
- Deploying software security module.
- Configuring the deployed application.
- Setting up SFTP.
- Configuring SMTP.
- Configuring antivirus.
- Configuring centralized log storage.
- Setting up HIDS/HIPS.
- Configuring system monitoring.
- Configuring application monitoring.
- Deploying and configuring document flow system.
- Conducting load testing.
- Preparing the infrastructure for certification.
Conclusion
In this article, we have outlined the extensive tasks involved in the deployment phase of our project, detailing the critical steps taken to prepare our infrastructure for certification. From setting up essential systems like DNS and local repositories to configuring advanced security measures and conducting thorough load testing, each task was executed with precision to ensure a robust and reliable environment.
Our approach included not only the technical PSP infrastructure setup but also strategic planning, such as implementing centralized log storage, configuring HIDS/HIPS, and preparing for future scalability. These efforts collectively contribute to a secure, efficient, and well-managed infrastructure.
With the infrastructure ready for certification, the second phase of PSP system deployment was completed. In the forthcoming article, we will delve into the branding and integration processes with acquirers, highlighting the steps taken to ensure a seamless transition and optimal performance.
Frequently asked questions
What is the purpose of using a local repository for updates?
A local repository reduces network load by centralizing updates and prevents potential issues with IP blocking from repeated requests to external sources. It also ensures that all servers receive updates in a controlled manner.
Why did you choose Harbor for the Docker registry instead of other options?
Harbor was selected for its ease of setup and flexibility, which suited our requirements for managing Docker images effectively. It provided the necessary functionality without the complexity of other solutions.
How does the Master-Slave configuration benefit your database setup?
The Master-Slave configuration enhances fault tolerance: if the Master fails, the Slave can take over. In our setup, the Slave also hosts the backup process, which keeps backup load off the Master, while all application queries are still handled by the Master.
What are the advantages of using Zabbix for system monitoring?
Zabbix offers comprehensive monitoring capabilities, including real-time status tracking, alerting, and a detailed web interface for analysis. This helps in maintaining system health and responding promptly to issues.
How do you handle the regular updates of antivirus databases?
We use a local repository to manage antivirus database updates, which are downloaded every two hours. This setup minimizes network load and avoids potential issues with IP blocking from repeated requests.
What role does openSCAP play in your certification preparation?
OpenSCAP performs internal security scans of the operating system and software, identifying vulnerabilities and allowing automatic remediation of security policy violations, which is crucial for meeting certification requirements.
Why is it important to use both HIDS and HIPS?
HIDS detects suspicious activities on the server level, while HIPS not only detects but also prevents potential threats. Using both systems provides a layered approach to security, enhancing overall protection.
How do you ensure the stability of your application servers during traffic spikes?
We use traffic balancing with NGINX to distribute the load between application servers. This setup helps maintain stability and performance by evenly distributing incoming traffic and handling failover seamlessly.
Want to know more about prices and services at each stage?
Check out our turnkey PSP setup package – the easiest way to start your business