AWS-Hosted WordPress Ecommerce POC
Building a proof of concept ecommerce site with WordPress, WooCommerce, and AWS services to bridge theory and practice
I spend my days reading research papers about cloud resource allocation and virtual machine placement algorithms. That is important work, and I believe in it. But I have been feeling a growing itch to build something tangible. Something that runs on real cloud infrastructure, serves real traffic, and forces me to make real architectural decisions.
So I built a proof of concept ecommerce site using WordPress and WooCommerce, hosted entirely on AWS. It took about two weeks of evenings and weekends, and I learned more about practical cloud architecture from this project than from any single research paper.
Why WordPress and WooCommerce
I know, I know. WordPress is not exactly cutting-edge technology. In the research lab, we talk about microservices, container orchestration, and distributed consensus algorithms. WordPress is a monolithic PHP application backed by MySQL. It is the opposite of everything we study.
But that is precisely why I chose it. WordPress powers a staggering percentage of the web. WooCommerce is one of the most widely used ecommerce platforms. If I want to understand how real applications behave on cloud infrastructure, I should start with something that represents what people actually deploy, not what researchers wish they would deploy.
Also, I had a practical motivation. A friend back home wants to start an online store selling handmade crafts. He asked me if I could help. Rather than pointing him to Shopify (the easy answer), I saw an opportunity to build something myself and learn AWS in the process.
The Architecture
Here is what I put together.
A single EC2 instance runs WordPress with the WooCommerce plugin. It is a t2.micro, which is part of the AWS free tier, so it costs me nothing for the first year. The instance runs Amazon Linux with Apache, PHP, and the WordPress application code.
The database runs on Amazon RDS, using a MySQL instance. Separating the database from the application server is a fundamental architectural decision. It means I can scale the web tier and the database tier independently. It also means the database benefits from RDS features like automated backups, point-in-time recovery, and multi-AZ failover (though I have not enabled multi-AZ because it doubles the cost and this is a proof of concept).
Static assets, meaning product images, CSS files, JavaScript, and downloadable content, are stored in Amazon S3. I installed a WordPress plugin called W3 Total Cache that offloads static content to S3 and optionally serves it through CloudFront, Amazon's CDN. This reduces the load on the EC2 instance and improves page load times for users who are geographically distant from the server.
Setting It Up
The setup process was more involved than I expected. Let me walk through the major steps.
First, I launched the EC2 instance. This meant choosing the right AMI (Amazon Machine Image), configuring security groups, and setting up key pairs for SSH access. Security groups are essentially firewall rules, and getting them right is important. I opened port 80 for HTTP traffic, port 443 for HTTPS (which I plan to add later), and port 22 for SSH, restricted to my IP address.
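I did most of this through the console, but the equivalent AWS CLI calls look roughly like the following sketch. The group name and the source IP are placeholders; substitute your own.

```shell
# Create a security group for the web server (name is a placeholder)
aws ec2 create-security-group \
    --group-name wp-web-sg \
    --description "WordPress web server"

# Allow HTTP and HTTPS from anywhere
aws ec2 authorize-security-group-ingress --group-name wp-web-sg \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name wp-web-sg \
    --protocol tcp --port 443 --cidr 0.0.0.0/0

# Allow SSH only from my own IP address (example address shown)
aws ec2 authorize-security-group-ingress --group-name wp-web-sg \
    --protocol tcp --port 22 --cidr 203.0.113.25/32
```

The `/32` suffix on the SSH rule is what restricts it to a single address; leaving it at `0.0.0.0/0` is the mistake that gets instances brute-forced.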
Second, I provisioned the RDS instance. This involved choosing the database engine (MySQL 5.6), selecting an instance class (db.t2.micro, also free tier eligible), configuring storage, and setting up the security group to allow connections only from the EC2 instance. That last part is important: the database should never be directly accessible from the internet.
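For reference, the RDS provisioning translates to something like this CLI sketch. The instance identifier, credentials, and security group IDs are all placeholders.

```shell
# Provision the MySQL instance (identifier and password are placeholders)
aws rds create-db-instance \
    --db-instance-identifier wp-poc-db \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --allocated-storage 20 \
    --master-username wpadmin \
    --master-user-password 'change-me-before-running' \
    --vpc-security-group-ids sg-0123456789abcdef0

# The database security group should accept MySQL traffic (port 3306)
# only from the web server's security group, never from the internet
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 \
    --source-group wp-web-sg
```

Using `--source-group` rather than a CIDR range is what enforces the "only the EC2 instance can connect" rule: the permission follows the security group, not any particular IP.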
Third, I installed the LAMP stack on the EC2 instance. Apache, PHP, and the MySQL client. Then WordPress, then WooCommerce. The WordPress installation itself is straightforward; the famous "five-minute install" is not an exaggeration. The WooCommerce setup takes longer because you need to configure shipping, payment gateways, tax rules, and product categories.
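The LAMP install on Amazon Linux boils down to a handful of commands, roughly as follows (paths are the defaults; your web root may differ).

```shell
# Install Apache, PHP, and the MySQL client on the EC2 instance
sudo yum update -y
sudo yum install -y httpd php php-mysql mysql

# Start Apache and make it survive reboots
sudo service httpd start
sudo chkconfig httpd on

# Download and unpack WordPress into the web root
cd /var/www/html
sudo curl -O https://wordpress.org/latest.tar.gz
sudo tar -xzf latest.tar.gz --strip-components=1
sudo chown -R apache:apache /var/www/html
```

From there, the five-minute install takes over in the browser; the one WordPress-on-RDS twist is that `DB_HOST` in wp-config.php must point at the RDS endpoint hostname rather than `localhost`.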
Fourth, I set up the S3 bucket for static assets and configured W3 Total Cache to use it. This required creating an IAM user with appropriate permissions (read and write access to the specific S3 bucket, nothing else) and configuring the plugin with those credentials.
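The scoped-down IAM policy is worth spelling out, because the temptation is to attach `AmazonS3FullAccess` and move on. A minimal sketch, with the bucket and user names as placeholders:

```shell
# Policy granting read/write on one asset bucket and nothing else
cat > s3-media-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::wp-poc-assets/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::wp-poc-assets"
    }
  ]
}
EOF

# Create a dedicated user for the plugin and attach the policy
aws iam create-user --user-name wp-media-uploader
aws iam put-user-policy --user-name wp-media-uploader \
    --policy-name wp-s3-media \
    --policy-document file://s3-media-policy.json

# Generate the access key pair to paste into the plugin settings
aws iam create-access-key --user-name wp-media-uploader
```

Note that object actions apply to `bucket/*` while `ListBucket` applies to the bucket ARN itself; mixing those two up is a common reason a correctly scoped policy appears not to work.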
What I Learned
Several things surprised me during this project.
Network latency between EC2 and RDS matters. Even though both services are in the same AWS region, there is measurable latency on each database query. WordPress is notoriously chatty with its database; a single page load can trigger dozens of queries. I added an object cache using Memcached (via ElastiCache) to reduce the database load, and the improvement was significant. Page load times dropped from about three seconds to under one second for cached pages.
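Standing up the Memcached layer is a one-liner against ElastiCache; the cluster ID is a placeholder, and the node type mirrors the free-tier sizing used elsewhere.

```shell
# Single-node Memcached cluster, same region as the web server
aws elasticache create-cache-cluster \
    --cache-cluster-id wp-object-cache \
    --engine memcached \
    --cache-node-type cache.t2.micro \
    --num-cache-nodes 1

# Retrieve the endpoint to paste into the caching plugin's settings
aws elasticache describe-cache-clusters \
    --cache-cluster-id wp-object-cache \
    --show-cache-node-info
```

The cluster needs its own security group rule allowing port 11211 from the web server's security group, the same pattern as the RDS rule: the cache, like the database, should never be reachable from the internet.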
S3 is remarkably reliable but has consistency caveats. When you upload a new object to S3, it is immediately available for reading. But if you overwrite an existing object, the old version might be served for a brief period. For a WordPress site where you occasionally update images, this is usually not a problem. But it is the kind of nuance that matters at scale.
Security is a continuous concern, not a one-time configuration. I spent time hardening the WordPress installation: disabling XML-RPC (a common attack vector), limiting login attempts, changing the default admin username, keeping plugins updated. I also configured AWS CloudTrail to log API calls, so I have an audit trail of who did what to my infrastructure.
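There are several ways to disable XML-RPC (plugins among them); I find blocking it at the web server the most robust, since requests never reach PHP at all. A sketch for Apache on Amazon Linux:

```shell
# Deny all requests to the XML-RPC endpoint before they reach PHP
sudo tee /etc/httpd/conf.d/harden-wp.conf <<'EOF'
<Files xmlrpc.php>
    Order Deny,Allow
    Deny from all
</Files>
EOF

# Reload Apache to pick up the new config
sudo service httpd restart
```

Requests to /xmlrpc.php then get a 403 without touching WordPress, which also spares the t2.micro the CPU cost of serving attack traffic.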
Cost management requires attention. Even with free tier services, it is easy to accidentally incur charges. I set up a billing alarm that notifies me if my estimated charges exceed five dollars. AWS provides cost explorer tools, but they require you to actually check them. I have made a habit of reviewing my bill weekly.
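The billing alarm can also be scripted. Two assumptions in this sketch: billing alerts are enabled in the account preferences (a one-time console step), and an SNS topic already exists to receive the notification; the topic ARN shown is a placeholder.

```shell
# Billing metrics only exist in us-east-1, regardless of where
# your resources actually run
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name billing-over-5-usd \
    --namespace "AWS/Billing" \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 5 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
```

EstimatedCharges is a cumulative metric that resets monthly, so `Maximum` over a six-hour period is the conventional statistic here.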
Connecting to My Research
This project illuminated aspects of cloud computing that my research addresses from a theoretical angle.
The decision of where to place the database relative to the application server is a simplified version of the workload placement problem I study in the lab. In my POC, the answer was straightforward: put them in the same availability zone to minimize latency. But at scale, with hundreds of services and complex dependency graphs, this becomes the NP-hard bin packing problem we model in our papers.
The auto-scaling challenge is also directly relevant. Right now, my site runs on a single EC2 instance. If my friend's craft store gets featured on social media and traffic spikes, that single instance will fall over. AWS Auto Scaling can add instances behind a load balancer, but configuring the scaling policies requires understanding workload patterns, which is exactly the prediction problem I mentioned in my earlier post about our research.
Even the S3 consistency model connects to the distributed systems literature on CAP theorem and eventual consistency. Amazon chose availability and partition tolerance over strong consistency, and that design decision has practical implications that I experienced firsthand.
Next Steps
The POC works. My friend can browse products, add them to a cart, and go through a checkout flow. I have not connected a real payment gateway yet (that requires a business registration and adds complexity I am not ready for), but the infrastructure is sound.
I want to write a follow-up post with a deeper technical dive into the architecture: the specific AWS configurations, the performance optimizations, and the cost breakdown. I also want to explore what it would take to make this architecture production-ready, with proper monitoring, automated deployments, and disaster recovery.
Building this has reinforced my conviction that good research needs to be grounded in practical experience. Reading about cloud computing is one thing. Running a real application on real cloud infrastructure, debugging real performance problems, and making real cost tradeoffs is something else entirely.