Provides a quick-start example of using CockroachDB and Spring Boot with banking data structures.
Digital Banking uses an API microservices approach to enable high-speed requests for account, customer, and transaction
information. As seen below, this data is useful for a variety of business purposes in the bank.

- Overview
- CockroachDB Advantages for Digital Banking
- Requirements
- Links that help
- Technical Overview
- Using Docker
- Deploying java app locally
- Using Terraform on Azure
- Investigate the APIs
- Deploy to 2 regions
## Overview

In this repo, a Java Spring Boot application is run as a jar file to support typical API calls to a CockroachDB banking data layer. A CockroachDB Docker configuration is included.
## CockroachDB Advantages for Digital Banking

- CockroachDB easily handles high write transaction volume
- CockroachDB Enterprise scales vertically (large nodes) and horizontally (many nodes) in a distributed manner
- CockroachDB is wire compatible with PostgreSQL for easy, familiar development (see the JDBC sketch after this list)
- CockroachDB has SQL-compatible, performant indexing capabilities that are lacking in NoSQL tools
- CockroachDB can do upgrades and schema modifications without downtime (RTO and RPO of zero)
- CockroachDB integrates easily with Kafka and Spring JPA
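
Because CockroachDB speaks the PostgreSQL wire protocol, the stock PostgreSQL JDBC driver is enough to connect. Here is a minimal sketch, assuming the insecure single-node Docker setup (localhost:26257, the defaultdb database, and the jhaugland user are assumptions for illustration, not code from this repo):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WireCompatibilityCheck {
    public static void main(String[] args) throws Exception {
        // CockroachDB's default SQL port is 26257; the standard PostgreSQL
        // JDBC URL format works unchanged. sslmode=disable matches an
        // insecure local cluster (an assumption for this sketch).
        String url = "jdbc:postgresql://localhost:26257/defaultdb?sslmode=disable";
        try (Connection conn = DriverManager.getConnection(url, "jhaugland", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT version()")) {
            while (rs.next()) {
                // prints a CockroachDB version string, served over the PostgreSQL protocol
                System.out.println(rs.getString(1));
            }
        }
    }
}
```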
## Requirements

- Docker installed on your local system; see the Docker installation instructions.
- Alternatively, you can run CockroachDB Enterprise and set the CockroachDB host and port in the application.properties file
- This Terraform and Ansible repo can be used to deploy this application to Azure
- When using Docker for Mac or Docker for Windows, the default resources allocated to the Linux VM running Docker are 2 GB RAM and 2 CPUs. Make sure to adjust these resources to meet the resource requirements for the containers you will be running. More information on adjusting the resources allocated to Docker can be found here: Docker for Mac, Docker for Windows.
- NOTE: it proved very difficult to change the Swagger UI port from 8080, so the application and Swagger UI are left on port 8080. This conflicts with the typical CockroachDB HTTP port of 8080, so the CockroachDB container's port 8080 must be mapped out to a different host port.
## Links that help

- Spring Data for Kafka
- Spring Data reference in domain
- Spring async tips
- Swagger UI with Spring
- Spring Data JPA
- Spring Boot PostgreSQL CRUD example
- Spring Boot PostgreSQL CRUD github
- Cockroach Labs University sample movr application
- Cockroach Labs Enterprise license key
- Hibernate inheritance stack overflow
## Technical Overview

The Java code in this repo uses Spring Data JPA with Java Persistence API repositories.
A basic Spring link:

- Spring CockroachDB

The code is organized into these packages (a sketch of how they fit together follows this list):

- `controller` - HTTP API call interfaces; a separate controller for each table's API
- `domain` - each of the Java objects with their columns; enables all the getter/setter methods
- `exception` - exception handling code
- `repository` - repository definitions; CRUD operations are defined here as well as specific Query statements
- `service` - the service layer the controllers use to interact with the repositories
- `util` - utility functions
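
A minimal sketch of the layering, assuming hypothetical class names (the Account entity, its fields, and the endpoint path are illustrative assumptions, not this repo's actual classes; jakarta.persistence assumes Spring Boot 3, while older Boot versions use javax.persistence):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// domain: a Java object whose fields map to table columns
@Entity
class Account {
    @Id
    private String accountNumber;
    private String customerId;

    public String getAccountNumber() { return accountNumber; }
    public void setAccountNumber(String accountNumber) { this.accountNumber = accountNumber; }
    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }
}

// repository: CRUD operations come for free from JpaRepository
interface AccountRepository extends JpaRepository<Account, String> {}

// service: the layer the controller uses to interact with the repository
@Service
class AccountService {
    private final AccountRepository repository;

    AccountService(AccountRepository repository) { this.repository = repository; }

    Account find(String accountNumber) {
        return repository.findById(accountNumber).orElse(null);
    }
}

// controller: the HTTP API call interface for the account table
@RestController
class AccountController {
    private final AccountService service;

    AccountController(AccountService service) { this.service = service; }

    @GetMapping("/account/{accountNumber}")
    public Account getAccount(@PathVariable String accountNumber) {
        return service.find(accountNumber);
    }
}
```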
The Java code demonstrates common API actions against the data layer in CockroachDB. The Spring Boot framework minimizes the amount of code needed to build and maintain this solution. Maven is used to build the Java code, and the code is deployed to the Tomcat server.
## Using Docker

This option uses Docker to run all of the non-application components (Kafka, CockroachDB), with the Java application running on the local Mac.
- Prepare the Docker environment; see the Requirements section above
- Clone this repo into a directory:
```bash
git clone https://github.com/jphaugla/Digital-Banking-CockroachDB.git
```
- Refer to the notes for the CockroachDB Docker images used, but don't get too bogged down, as docker-compose handles everything except for a few admin steps on tomcat
- CockroachDB stack docker instructions
- Open a terminal, change to the repo home directory (where the docker-compose.yml file is), and bring up Docker
- Instead of the single-node CockroachDB in docker-compose.yml, a 3-node CockroachDB cluster with haproxy in docker-compose-3-crdb.yml can be used
```bash
docker-compose -f docker-compose-kafka.yml -f docker-compose.yml up -d
```
- NOTE: the username creation has been added to the docker-compose file, so the next step is no longer needed
- SKIP: add the jhaugland user and give it full permissions
```bash
cockroach sql --insecure
> create user jhaugland;
> grant all on database defaultdb to jhaugland;
```

## Deploying java app locally

- Ensure Maven and Java are deployed on the local machine
- Tested with Java 17 and Java 18, but other versions should work as well
- Tested with Maven 14.2.1
- Set up the environment and run the Java application locally:
  - Edit the environment file to use localhost for the non-application components
  - Source the environment file
  - Run the application
```bash
source scripts/setEnv.sh
mvn clean package
java -jar target/cockroach-0.0.1-SNAPSHOT.jar
```
- Add the JDBC sink to get data into CockroachDB from Kafka
- This is done by the shell script createCockroachTransform.sh:
```bash
cd Digital-Banking-CockroachDB/scripts
./createCockroachTransform.sh
```
- Ensure the parameter doKafka in saveTransaction.sh is set to true if using Kafka
```bash
cd scripts/transaction
./saveTransaction.sh
```
- Check Kafka Control Center to ensure data is flowing in by going to localhost:9021
- Click anywhere on the large controlcenter.cluster rectangle
- In the leftmost tab, click on the Connect button
- In the main panel, click on connect-default to see that the cockroach-sink-json-transform sink is running
- You should see one row in the transaction table:
```bash
docker exec -it crdb1 bash
cockroach sql --insecure
> select * from transaction;
```
Continue testing with the Processing Larger Record set steps below.
## Using Terraform on Azure

- Use this repo to deploy all of the components (including the application):
```bash
git clone https://github.com/jphaugla/crdb-terraform-ansible
```
- Check the README for the details on deploying that repo, including cloning it and working with Azure. Completely deploy the Terraform repo for all deployments. This will also deploy this repo inside the app node. The later application deployment instructions are run within the app node using ssh.
- Maven and Java will be installed by the Ansible jobs for the app node
- CockroachDB cdc-sink can be used with this repo for two-datacenter CockroachDB deployments
- See the instructions for 2 regions below
- The IP information is shared in a subdirectory for the region under the temp directory within the terraform/ansible repository. Go to the files there to see the private (internal) and public (external) Kafka node and test node IP addresses. The CockroachDB internal and external database connection DNS names are also available; these DNS names will also give an internal and external CockroachDB Enterprise node IP.
- NOTE: this compiling and starting of the application has been automated in Terraform, so the following steps are only for debugging/understanding
- You can skip forward to Test application
- Log into the app node using the app node IP and the ssh key defined in test/main.tf, and go to the repo home:
```bash
cd crdb-terraform-ansible/provisioners/temp/<region name>
ssh -i path_to_ssh_file adminuser@`cat app_external_ip.txt`
cd /opt/Digital-Banking-CockroachDB
```
- Edit the environment file using only the internal connection addresses. NOTE: Kafka will only accept connections from local Azure IP addresses and not any public IP addresses. Using both public and private Kafka addresses is possible but is not currently configured.
- These steps can all be done from the client machine's local browser using the Kafka node public IP address and port 9021, e.g. http://172.172.133.201:9021/. From this home screen, pause the currently running connectors, datagen-transactions and cockroach-sink-json, using the Kafka Control Center. This just removes the noise of a second application running.
- Also drop the CockroachDB transaction table:
```bash
# connect to the app node
cd crdb-terraform-ansible/provisioners/temp/<region name>
ssh -i path_to_ssh_file adminuser@`cat app_external_ip.txt`
# connect to cockroach using the private address for the haproxy
cockroach-sql --host=192.168.3.102 --certs-dir=certs --user jhaugland
> drop table transaction;
```
- CockroachDB tables will be created automatically by Spring Data JPA
- Start the application after logging in to the app node:
```bash
cd crdb-terraform-ansible/provisioners/temp/<region name>
ssh -i path_to_ssh_file adminuser@`cat app_external_ip.txt`
cd /opt/Digital-Banking-CockroachDB
mvn clean package
# edit scripts/setEnv.sh for the current nodes - CockroachDB_HOST, CockroachDB_PORT, and KAFKA_HOST
# must all change to match the current environment. IMPORTANT: only use private/internal IP
# addresses - DO NOT use localhost. Also note: the CockroachDB password is different in the local
# docker version and the ansible-created version - verify the CockroachDB password!
source scripts/setEnv.sh
java -jar target/cockroach-0.0.1-SNAPSHOT.jar
```
- Get a second terminal window to the app node and write a test message to Kafka; this will cause the topic to be created. The topic name can be changed in application.properties, but the default topic name is transactions.
```bash
cd crdb-terraform-ansible/provisioners/temp/<region name>
ssh -i path_to_ssh_file adminuser@`cat app_external_ip.txt`
cd /opt/Digital-Banking-CockroachDB/scripts/transaction
# make sure the saveTransaction script says doKafka=true
./saveTransaction.sh
```
- Verify the transactions topic is created using Kafka Control Center
- In Control Center, click on Topics, and then on the topics page, click Messages
- If you run saveTransaction.sh again while looking at the Control Center topic pane, the message will be visible. If you set the offset to 0, both messages will be visible.
- The application creates the Kafka topic on first use of the topic (see the sketch below)
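
As a hedged sketch of how the Spring application writes to that topic (the class name and property name below are assumptions for illustration, not this repo's exact code), a send through Spring Kafka's KafkaTemplate is all that is needed; the broker creates the topic on first use when topic auto-creation is enabled:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class TransactionPublisher {
    private final KafkaTemplate<String, String> kafkaTemplate;
    private final String topic;

    // Spring Boot auto-configures the KafkaTemplate from spring.kafka.* properties;
    // the "transactions" default mirrors the default topic name mentioned above,
    // and the "topic.name" property key is an assumption for this sketch
    public TransactionPublisher(KafkaTemplate<String, String> kafkaTemplate,
                                @Value("${topic.name:transactions}") String topic) {
        this.kafkaTemplate = kafkaTemplate;
        this.topic = topic;
    }

    public void publish(String transactionId, String transactionJson) {
        // the first send triggers topic creation when the broker allows auto-creation
        kafkaTemplate.send(topic, transactionId, transactionJson);
    }
}
```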
- Call the Kafka Connect API to create the CockroachDB sink using the provided script. DO THIS FROM your local Mac
- NOTE: this has been automated in the Ansible script, so just verify in Kafka that the cockroach-sink-json-transform sink is created
```bash
cd /opt/Digital-Banking-CockroachDB/scripts
# change localhost to the external/public IP address for the kafka node in the last line.
# Make sure this is the public kafka IP and not the private IP
# Verify the CockroachDB.uri and CockroachDB.password (the CockroachDB.uri must be the INTERNAL haproxy IP)
./createCockroachTransformApp.sh
```
```bash
ssh -i ~/.ssh/<sshkey> adminuser@<appnode public ip>
cd transaction
./saveTransaction.sh
```
- Verify data flowed in to CockroachDB back on the application node:
```bash
# connect to the app node
ssh -i ~/.ssh/<sshkey> adminuser@<appnode public ip>
# connect to cockroach using the private address for the haproxy
# must be in /home/adminuser to use the certs dir
cockroach-sql --host=192.168.3.102 --certs-dir=certs --user jhaugland
> select * from transaction;
```
### Processing Larger Record set

```bash
# on local mac
cd Digital-Banking-CockroachDB/scripts
# change localhost to the public ip address for the kafka node in the last line.
# Set the contactPoints to the local IP address for the cockroach node.
ssh -i ~/.ssh/<sshkey> adminuser@<appnode public ip>
./transaction/saveTransaction.sh
# verify generateData.sh says doKafka=true
./scripts/generateData.sh
```
- Will see a large number of records now in CockroachDB
## Investigate the APIs

The hostname here is the public IP of the application node, or localhost if running on Docker.
The scripts are in ./scripts. The CockroachDB search queries behind each script are also being added here; a sketch of the kind of Spring Data query behind these scripts follows the list.
- addTag.sh - add a tag to a transaction. Tags allow a user to mark transactions into buckets such as Travel or Food for budgetary tracking purposes
- deleteCustomer.sh - delete all customers matching a string
- generateData.sh - simple API to generate default customer, accounts, merchants, phone numbers, emails and transactions
- generateLots.sh - for server testing to generate higher load levels. Use with startAppservers.sh. Not for use with docker setup. This is load testing with CockroachDB enterprise and client application running in same network in the cloud.
- getByAccount.sh - find transactions for an account within a date range
- getByCreditCard.sh - find transactions for a credit card within a date range
- getByCustID.sh - retrieve transactions for a customer
- getByEmail.sh - retrieve a customer record using the email address
- getByMerchant.sh - find all transactions for an account from one merchant within a date range
- getByMerchantCategory.sh - find all transactions for an account from a merchant category within a date range
- getByNamePhone.sh - get customers by phone and full name.
- getByPhone.sh - get customers by phone only
- getByStateCity.sh - get customers by city and state
- getByZipLastname.sh - get customers by zipcode and lastname.
- getReturns.sh - get returned transactions count by reason code
- getTags.sh - get all tags on an account
- getTaggedAccountTransactions.sh - find transactions for an account with a particular tag
- getTransaction.sh - get one transaction by its transaction ID
- getTransactionStatus.sh - see the count of transactions by status: PENDING, AUTHORIZED, SETTLED
- putCustomer.sh - put a set of json customer records
- saveAccount.sh - save a sample account
- saveCustomer.sh - save a sample customer
- saveTransaction.sh - save a sample Transaction
- startAppservers.sh - start multiple app server instances for load testing
- updateTransactionStatus.sh - generate new transactions to move all transactions from one transaction status up to the next. The parameter is the target status; choose SETTLED or POSTED. Will move 100,000 transactions per call
- putDispute.sh - put the dispute specified in dispute.sh
- disputeReasonCode.sh - set the dispute reason code
- disputeAccept.sh - accept the dispute
- disputeResolved.sh - charge back the dispute
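
For example, a date-range search like getByAccount.sh typically maps to a Spring Data JPA derived query. A hedged sketch, assuming hypothetical field and method names (not the repo's actual Transaction class):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import java.time.LocalDateTime;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// hypothetical minimal entity; the real Transaction class carries many more columns
@Entity
class Transaction {
    @Id
    private String transactionId;
    private String accountNumber;
    private LocalDateTime postingDate;
    // getters/setters omitted for brevity
}

interface TransactionRepository extends JpaRepository<Transaction, String> {
    // Spring Data derives the SQL from the method name:
    // WHERE account_number = ? AND posting_date BETWEEN ? AND ?
    List<Transaction> findByAccountNumberAndPostingDateBetween(
            String accountNumber, LocalDateTime startDate, LocalDateTime endDate);
}
```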
## Deploy to 2 regions

- Use the terraform/ansible deployment with the subdirectories region1 and region2 in the deployment repo
- Deployment of Kafka can be disabled by setting the include_ha_proxy flag to "no" in the deploy main.tf
- Ensure the install_cdc_sink and create_cdc_sink flags are set to true in main.yml
- Ensure install_enterprise_keys is set in both region1 and region2
- Run terraform apply in each region directory:
```bash
export TF_VAR_cluster_organization={CLUSTER ORG}
export TF_VAR_enterprise_license={LICENSE}
git clone https://github.com/nollenr/crdb-terraform-ansible.git
cd crdb-terraform-ansible/region1
terraform init
terraform apply
cd ../region2
terraform init
terraform apply
```
- This will deploy the Digital-Banking-CockroachDB repo onto the application node with connectivity to CockroachDB. Additionally, cdc-sink is deployed and running on the application node, also with connectivity to haproxy and CockroachDB in the same region.
- Ensure cdc-sink is running on each of the region application nodes:
```bash
cd ~/crdb-terraform-ansible/provisioners/temp/{region_name}
ssh -i path_to_ssh_file adminuser@`cat app_external_ip.txt`
ps -ef | grep cdc-sink
# if it is not running, start it
cd /opt/cdc-sink-linux-amd64-master
./start.sh
```
- NOTE: this compiling and starting of the application has been automated in Terraform, so the following steps are only for debugging/understanding
- The Java application needs to be started manually on the application node for each region. Set up the environment file:
  - The IP addresses can be found in a subdirectory under temp for each deployed region
  - Make sure to set the COCKROACH_HOST environment variable to the private IP address of the haproxy node
  - Uncomment the COCKROACH_URL line with sslmode=verify-full
  - Comment out the line with sslmode=disable
  - Uncomment the COCKROACH_DB_PASS line to provide a password for the connection
  - If using Kafka, KAFKA_HOST should be set to the internal IP address for Kafka
  - Set REGION to the correct region
- Do this on each region:
```bash
cd ~/crdb-terraform-ansible/provisioners/temp/{region_name}
ssh -i path_to_ssh_file adminuser@`cat app_external_ip.txt`
cd /opt/Digital-Banking-CockroachDB
# edit scripts/setEnv.sh as documented above
source scripts/setEnv.sh
mvn clean package
java -jar target/cockroach-0.0.1-SNAPSHOT.jar
```
- The necessary manual step is to deploy a CockroachDB changefeed across the regions so that cdc-sink can be active/active between the two otherwise independent regions
- Port 30004 is open on both regions to allow the changefeed to communicate with the application server in the other region
- Start the changefeed on each side, with the changefeed pointing to the other side's application external IP address
- The changefeed script is written on each of the cockroach database nodes by the Terraform script. Log in to any of the cockroach nodes using the IP address in temp for each deployed region.
- As previously mentioned, the changefeed script must be modified to point to the application external IP address of the other region
- This is the step that reaches across to the other region, as everything else stays within region boundaries
- IMPORTANT NOTE: an Enterprise license is required for changefeeds to be enabled
```bash
cd ~/crdb-terraform-ansible/provisioners/temp/{region_name}
ssh -i path_to_ssh_file adminuser@`cat crdb_external_ip{any ip_address}`
# edit create-changefeed.sh putting the app node external IP address for the other region in the
# create changefeed statement where the webhook-https address is specified.
# The host remains unchanged as localhost
cockroach sql --host=localhost --certs-dir=certs
SET CLUSTER SETTING cluster.organization = 'Acme Company';
SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx';
exit
vi create-changefeed.sh
./create-changefeed.sh
```

Verify rows are flowing across from either region by running the test application steps.

