Categories
microservices spring-boot

CRUD operations in spring boot with MySQL

In this tutorial, we will learn CRUD operations in spring boot with MySQL.

If you want to learn more about connecting MySQL from spring boot, please follow this link https://beginnersbug.com/connect-mysql-database-from-spring-boot/

What you’ll learn

By the end of this tutorial, you will know how to perform CRUD operations in spring boot with MySQL.

Save syntax
// Here studentsDao is the repository interface
studentsDao.save(entity);
Select all syntax
// Here studentsDao is the repository interface
studentsDao.findAll();
Select by id syntax
// Here studentsDao is the repository interface
studentsDao.findById(id);
Delete all syntax
// Here studentsDao is the repository interface
studentsDao.deleteAll();		
Delete by id syntax
// Here studentsDao is the repository interface
studentsDao.deleteById(id);
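Putting these together, here is a minimal usage sketch inside a service or controller method, assuming the Student entity and StudentDao repository defined later in this post (the sample values are hypothetical):

// Create
Student student = new Student();
student.setFirstname("John");
student.setLastname("Doe");
studentsDao.save(student);
// Read all
List<Student> allStudents = studentsDao.findAll();
// Read one
Optional<Student> oneStudent = studentsDao.findById(student.getId());
// Delete one
studentsDao.deleteById(student.getId());
// Delete all
studentsDao.deleteAll();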
Database Scripts
CREATE TABLE students (
    id int NOT NULL AUTO_INCREMENT,
    firstname varchar(255) NOT NULL,
    lastname varchar(255) NOT NULL,
    department varchar(255),
    PRIMARY KEY (id)
);
Model Class
import java.io.Serializable;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;

/**
 * The persistent class for the students database table.
 * 
 */
@Entity
@Table(name = "students")
@NamedQuery(name = "Student.findAll", query = "SELECT s FROM Student s")
public class Student implements Serializable {

	private static final long serialVersionUID = 1L;

	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Long id;

	private String department;

	private String firstname;

	private String lastname;

	public Student() {
	}

	public Long getId() {
		return id;
	}

	public void setId(Long id) {
		this.id = id;
	}

	public String getDepartment() {
		return this.department;
	}

	public void setDepartment(String department) {
		this.department = department;
	}

	public String getFirstname() {
		return this.firstname;
	}

	public void setFirstname(String firstname) {
		this.firstname = firstname;
	}

	public String getLastname() {
		return this.lastname;
	}

	public void setLastname(String lastname) {
		this.lastname = lastname;
	}

}
JpaRepository

Here I am using JpaRepository to achieve CRUD operations easily. JpaRepository has inbuilt methods like findAll, findById, save, delete, and deleteById.

StudentDao

You need to extend JpaRepository in your dao interface with the entity class and the primary key type, as shown below.

Here my entity class is Student.java and the primary key type is Long.

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

import com.beginnersbug.studentservice.model.Student;

@Repository
public interface StudentDao extends JpaRepository<Student, Long> {

}
Controller Class
import java.util.List;
import java.util.Optional;

import javax.validation.Valid;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import com.beginnersbug.studentservice.dao.StudentDao;
import com.beginnersbug.studentservice.model.Student;

@RestController
@RequestMapping("/api/student")
public class StudentController {

	@Autowired
	StudentDao studentsDao;

	@RequestMapping(method = RequestMethod.GET)
	public List<Student> getStudentsList() {
		return studentsDao.findAll();
	}

	@RequestMapping(value = "/{id}", method = RequestMethod.GET)
	public Student getStudent(@PathVariable("id") String id) {
		Optional<Student> student = studentsDao.findById(Long.parseLong(id));
		return student.get();
	}

	@RequestMapping(method = RequestMethod.POST)
	public ResponseEntity<String> addUser(@RequestBody Student student) {
		studentsDao.save(student);
		return new ResponseEntity<String>("Student Created Successfully", HttpStatus.CREATED);

	}

	@RequestMapping(method = RequestMethod.PUT)
	public ResponseEntity<String> updateUser(@Valid @RequestBody Student student) {
		Student updatedStudent = studentsDao.findById(student.getId()).get();
		updatedStudent.setFirstname(student.getFirstname());
		updatedStudent.setLastname(student.getLastname());
		studentsDao.save(updatedStudent);
		return new ResponseEntity<String>("Student Updated Sucessfully ", HttpStatus.NO_CONTENT);

	}

	@RequestMapping(method = RequestMethod.DELETE)
	public ResponseEntity<String> deleteAllStudents() {
		studentsDao.deleteAll();
		return new ResponseEntity<String>("Student deleted ", HttpStatus.NO_CONTENT);
	}

	@RequestMapping(value = "/{id}", method = RequestMethod.DELETE)
	public ResponseEntity<String> deleteStudent(@PathVariable("id") String id) {
		studentsDao.deleteById(Long.parseLong(id));
		return new ResponseEntity<String>("Student deleted ", HttpStatus.NO_CONTENT);
	}

}

In the above controller class, we are calling the findAll, findById, save, deleteAll, and deleteById methods to perform the CRUD operations. You can exercise the endpoints as shown below.
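If you want to try these endpoints quickly, the below curl commands are a sketch of how they could be called, assuming the application runs on localhost:8080 (the sample payload values are hypothetical):

# Create a student
curl -X POST -H "Content-Type: application/json" -d '{"firstname":"John","lastname":"Doe"}' http://localhost:8080/api/student
# List all students
curl http://localhost:8080/api/student
# Get a student by id
curl http://localhost:8080/api/student/1
# Delete a student by id
curl -X DELETE http://localhost:8080/api/student/1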

Advantage of using JpaRepository

You don’t need to write any queries or method implementations; they are all inbuilt in the JpaRepository interface.

Exceptions

java.sql.SQLException: Field ‘id’ doesn’t have a default value

You may get the above exception while implementing. Make sure of the below things:

You should have the below annotation in the model class: @GeneratedValue(strategy = GenerationType.IDENTITY)

Also make sure your table has a primary key with the auto_increment attribute.
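If the table was already created without it, a statement like the below sketch (MySQL syntax) adds the auto_increment attribute:

ALTER TABLE students MODIFY id int NOT NULL AUTO_INCREMENT;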

org.hibernate.id.IdentifierGenerationException: ids for this class must be manually assigned before calling save()

Make sure your model class has @Id and @GeneratedValue(strategy = GenerationType.IDENTITY)

Github

https://github.com/rkumar9090/student-service

Related Articles

connect MySQL database from spring boot

Categories
pyspark

window function in pyspark with example

In this post, We will learn about window function in pyspark with example.

What is window function ?

A window function in pyspark acts in a similar way to a group by clause in SQL, except that the rows are not collapsed into a single row per group.

It groups a set of rows based on a particular column and performs an aggregating function over each group, keeping every input row in the result.

Sample program for creating dataframe

For understanding the concept better, we will create a dataframe containing the salary details of some employees using the below program.

# Creating dictionary with employee and their salary details
dict1=[{"Emp_id" : 123 , "Dep_name" : "Computer" , "Salary" : 2500 },
       {"Emp_id" : 456 , "Dep_name" : "Economy" , "Salary" : 4500 },
       {"Emp_id" : 789 , "Dep_name" : "History" , "Salary" : 6700 },
       {"Emp_id" : 564 , "Dep_name" : "Computer" , "Salary" : 1400 },
       {"Emp_id" : 987 , "Dep_name" : "History" , "Salary" : 3450 },
       {"Emp_id" : 678 , "Dep_name" : "Economy" , "Salary" : 6700 }]
# Creating RDD from the dictionary created above
rdd1=sc.parallelize(dict1)
# Converting RDD to dataframe
df1=rdd1.toDF()
print("Printing the dataframe df1")
df1.show()
Printing the dataframe df1
+--------+------+------+
|Dep_name|Emp_id|Salary|
+--------+------+------+
|Computer|   123|  2500|
| Economy|   456|  4500|
| History|   789|  6700|
|Computer|   564|  1400|
| History|   987|  3450|
| Economy|   678|  6700|
+--------+------+------+
How to use window function in our program?

In the below segment of code, the window function is used to get the sum of the salaries over each department.

The following imports are required before executing the code. Note that sum must be the one from pyspark.sql.functions, not the Python built-in.

from pyspark.sql import Window
from pyspark.sql.functions import sum

partitionBy includes the column name based on which the grouping needs to be done.

df = df1.withColumn("Sum",sum('Salary').over(Window.partitionBy('Dep_name')))
print("Printing the result")
df.show()
Printing the result
+--------+------+------+-----+
|Dep_name|Emp_id|Salary|  Sum|
+--------+------+------+-----+
|Computer|   123|  2500| 3900|
|Computer|   564|  1400| 3900|
| History|   789|  6700|10150|
| History|   987|  3450|10150|
| Economy|   456|  4500|11200|
| Economy|   678|  6700|11200|
+--------+------+------+-----+
Other aggregate functions

As above, we can do the same with the other aggregate functions too. Some of those aggregate functions are max(), avg(), min(), and collect_list().

Below are a few examples of those aggregate functions.
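A minimal sketch of those functions, assuming the df1 dataframe created above:

from pyspark.sql import Window
from pyspark.sql.functions import max, avg, min

# Same partitioning as before: one group per department
w = Window.partitionBy('Dep_name')
df_agg = df1.withColumn("Max", max('Salary').over(w)) \
            .withColumn("Avg", avg('Salary').over(w)) \
            .withColumn("Min", min('Salary').over(w))
df_agg.show()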

Reference

https://medium.com/@rbahaguejr/window-function-on-pyspark-17cc774b833a

Window functions with advanced ranking functions like row_number(), rank(), and dense_rank() will be discussed in our other blogs; a quick preview is sketched below.
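As a preview sketch, assuming the same df1 dataframe, note that the ranking functions additionally need an orderBy inside the window:

from pyspark.sql import Window
from pyspark.sql.functions import row_number

# Rank rows within each department by salary
w = Window.partitionBy('Dep_name').orderBy('Salary')
df1.withColumn("row_num", row_number().over(w)).show()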

Categories
pyspark

Left-anti and Left-semi join in pyspark

In this post, We will learn about Left-anti and Left-semi join in pyspark dataframe with examples.

Sample program for creating dataframes

Let us start with the creation of two dataframes. After that, we will move into the concept of Left-anti and Left-semi join in pyspark dataframes.

# Creating two dictionaries with Employee and Department details
dict=[{"Emp_id" : 123 , "Emp_name" : "Raja" },{"Emp_id" : 234 , "Emp_name" : "Sindu"},{"Emp_id" : 456 , "Emp_name" : "Ravi"}]
dict1=[{"Emp_id" : 123 , "Dep_name" : "Computer" } , {"Emp_id" : 456 ,"Dep_name"  :"Economy"} , {"Emp_id" : 789 , "Dep_name" : "History"}]
# Creating RDDs from the above dictionaries using parallelize method
rdd=sc.parallelize(dict)
rdd1=sc.parallelize(dict1)
# Converting RDDs to dataframes 
df=rdd.toDF()
df1=rdd1.toDF()
print("Printing the first dataframe")
df.show()
print("Printing the second dataframe")
df1.show()
Printing the first dataframe
+------+--------+
|Emp_id|Emp_name|
+------+--------+
|   123|    Raja|
|   234|   Sindu|
|   456|    Ravi|
+------+--------+
Printing the second dataframe
+--------+------+
|Dep_name|Emp_id|
+--------+------+
|Computer|   123|
| Economy|   456|
| History|   789|
+--------+------+
What is Left-anti join ?

This join returns only the records from the left dataframe that do not have matching records in the right dataframe.

As the below sample program shows, only the columns from the left dataframe are available in Left-anti and Left-semi joins, not all the columns from both dataframes as in the other types of joins.

Sample program – Left-anti join

Emp_id: 234 is only available in the left dataframe and not in the right dataframe.

# Left-anti join between the two dataframes df and df1 based on the column Emp_id
df2=df.join(df1,['Emp_id'], how = 'left_anti')
print("Printing the result of left-anti below")
df2.show()
Printing the result of left-anti below
+------+--------+
|Emp_id|Emp_name|
+------+--------+
|   234|   Sindu|
+------+--------+
What is Left-semi join?

This join returns the records from the left dataframe that have a match in the right dataframe.

In the below sample program, the two Emp_ids 123 and 456 are available in both dataframes, so they are picked up here.

Sample program – Left-semi join
# Left-semi join between two dataframes df and df1
df3=df.join(df1,['Emp_id'], how = 'left_semi')
print("Printing the result of left-semi below")
df3.show()
Printing the result of left-semi below
+------+--------+
|Emp_id|Emp_name|
+------+--------+
|   123|    Raja|
|   456|    Ravi|
+------+--------+

Other types of join in pyspark are the outer join and the inner join.

Reference

https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=join#pyspark.sql.DataFrame.join

Categories
pyspark

Outer join in pyspark dataframe with example

In this post, we will learn about outer join in pyspark dataframe with example.

If you want to learn about inner join, refer to the inner join post.

There are other types of joins like inner join, left-anti join, and left-semi join.

What you will learn

At the end of this tutorial, you will learn Outer join in pyspark dataframe with example

Types of outer join

Types of outer join in pyspark dataframe are as follows :

  • Right outer join / Right join 
  • Left outer join / Left join
  • Full outer join /Outer join / Full join 
Sample program for creating two dataframes

We will start with the creation of two dataframes before moving into the topic of outer join in pyspark dataframe.

#Creating dictionaries
dict=[{"Emp_id" : 123 , "Emp_name" : "Raja" },{"Emp_id" : 234 , "Emp_name" : "Sindu"},{"Emp_id" : 456 , "Emp_name" : "Ravi"}]
dict1=[{"Emp_id" : 123 , "Dep_name" : "Computer" } , {"Emp_id" : 456 ,"Dep_name"  :"Economy"} , {"Emp_id" : 789 , "Dep_name" : "History"}]
# Creating RDDs from the above dictionaries using parallelize method
rdd=sc.parallelize(dict)
rdd1=sc.parallelize(dict1)
# Converting RDDs to dataframes 
df=rdd.toDF()
df1=rdd1.toDF()
print("Printing the first dataframe")
df.show()
print("Printing the second dataframe")
df1.show()
Printing the first dataframe
+------+--------+
|Emp_id|Emp_name|
+------+--------+
|   123|    Raja|
|   234|   Sindu|
|   456|    Ravi|
+------+--------+
Printing the second dataframe
+--------+------+
|Dep_name|Emp_id|
+--------+------+
|Computer|   123|
| Economy|   456|
| History|   789|
+--------+------+
What is Right outer join ?

The right outer join returns all the records from the right dataframe along with the matching records from the left dataframe.

The remaining unmatched columns of the left dataframe will be populated with null.

Sample program – Right outer join / Right join

Within the join syntax, the type of join to be performed is mentioned as right_outer or right.

As the Emp_name for Emp_id 789 is not available in the left dataframe, it is populated with null in the following result.

# Right outer join / Right join 
df2=df.join(df1,['Emp_id'], how = 'right_outer')
print("Printing the result of right outer / right join")
df2.show()
Printing the result of right outer / right join
+------+--------+--------+
|Emp_id|Emp_name|Dep_name|
+------+--------+--------+
|   789|    null| History|
|   123|    Raja|Computer|
|   456|    Ravi| Economy|
+------+--------+--------+
What is Left outer join ?

This join is used to retrieve all the records from the left dataframe along with their matching records from the right dataframe.

The type of join can be mentioned either as left_outer or left.

Sample program – Left outer join / Left join

In the below example, for Emp_id 234, Dep_name is populated with null as there is no record for this Emp_id in the right dataframe.

# Left outer join / Left join
df3=df.join(df1,['Emp_id'], how = 'left_outer')
print("Printing the result of Left outer join / Left join")
df3.show()
Printing the result of Left outer join / Left join
+------+--------+--------+
|Emp_id|Emp_name|Dep_name|
+------+--------+--------+
|   234|   Sindu|    null|
|   123|    Raja|Computer|
|   456|    Ravi| Economy|
+------+--------+--------+
What is Full outer join ?

A full outer join generates the result with all the records from both dataframes. Null is populated in the columns for the unmatched records.

Sample program – Full outer join / Full join / Outer join

All the Emp_ids from both dataframes are combined in this case, with null populated for unavailable values.

# Full outer join / Full join / Outer join
df4=df.join(df1,['Emp_id'], how = 'full_outer')
print("Printing the result of Full outer join")
df4.show()
Printing the result of Full outer join
+------+--------+--------+
|Emp_id|Emp_name|Dep_name|
+------+--------+--------+
|   789|    null| History|
|   234|   Sindu|    null|
|   123|    Raja|Computer|
|   456|    Ravi| Economy|
+------+--------+--------+
Reference

https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.join.html?highlight=outer%20join

Categories
microservices spring-boot

connect MySQL database from spring boot

In this tutorial, we will learn to connect a MySQL database from spring boot with Spring Data.

What is Spring Data

Spring Data makes it easy to use data access technologies, relational and non-relational databases, map-reduce frameworks, and cloud-based data services.

What You Will learn

By the end of this tutorial, you will know how to connect a MySQL database from spring boot and execute SQL queries from it.

MySQL Database Scripts
create database beginnersbug;
use beginnersbug;
CREATE TABLE students (
    id int NOT NULL AUTO_INCREMENT,
    firstname varchar(255) NOT NULL,
    lastname varchar(255) NOT NULL,
    department varchar(255),
    PRIMARY KEY (id)
);
Dependency
<!-- Added for Database connection -->
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
</dependency>
application.properties
# Add database url here
spring.datasource.url=jdbc:mysql://localhost:3306/beginnersbug
spring.datasource.username=root
spring.datasource.password=password
# Add driver class here 
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect

The below class is the representation of the students table.

Students.java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Students {

	@Id
	@GeneratedValue(strategy = GenerationType.AUTO)
	@Column(name = "id")
	private long id;

	@Column(name = "firstname")
	private String firstName;

	@Column(name = "lastname")
	private String lastName;

	@Column(name = "department")
	private String department;

	public long getId() {
		return id;
	}

	public void setId(long id) {
		this.id = id;
	}

	public String getFirstName() {
		return firstName;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public String getLastName() {
		return lastName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}

	public String getDepartment() {
		return department;
	}

	public void setDepartment(String department) {
		this.department = department;
	}

}

Here we extend JpaRepository, which has predefined methods like findAll(), findById(), findAllById(), save(), and delete().

StudentDao.java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

import com.beginnersbug.student.model.Students;

@Repository
public interface StudentDao extends JpaRepository<Students, Long> {

}
StudentController.java

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.beginnersbug.student.dao.StudentDao;
import com.beginnersbug.student.model.Students;

// This annotation is used to mark a class as a controller class
@RestController
public class StudentController {

	@Autowired
	StudentDao studentsDao;


	@GetMapping("")
	public List<Students> getStudentsList() {
		List<Students> findAll = studentsDao.findAll();
		return findAll;
	}

}
@Entity

Specifies that the class is an entity. It should mirror the structure of the corresponding table.

@Id

Specifies the primary key of an entity. The field mapped to the table's primary key should be annotated with @Id.

@GeneratedValue(strategy = GenerationType.AUTO)

Specifies the generation strategy for primary key values when inserting a record into the table.

@Column(name = “id”)

Specifies the mapped column for a persistent property or field. If no Column annotation is specified, the default values apply.

@Repository

This annotation marks the interface as a DAO (repository) class.

Time needed: 45 minutes

Steps

  1. Create Spring boot Project

    Follow this tutorial to create a spring boot application:
    https://beginnersbug.com/how-to-create-spring-boot-application/

  2. Create Database

    Open the MySQL client and create the database using the below command

    create database beginnersbug;

  3. Create Table

    Create the table with the below command

    use beginnersbug;
    CREATE TABLE students (
    id int NOT NULL AUTO_INCREMENT,
    firstname varchar(255) NOT NULL,
    lastname varchar(255) NOT NULL,
    department varchar(255),
    PRIMARY KEY (id)
    );

  4. Create Entity class for Students table

    Please refer to Students.java above

  5. Create Dao Interface

    Please refer to StudentDao.java above

  6. Create Controller class

    Refer to StudentController.java above

  7. Run

    Navigate to the main class, right click, and choose Run As –> Java Application

  8. Testing

    Before testing, please add an entry to the students table.
    Open the browser at http://localhost:8080/ and you can see the result.
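    For example, after inserting a hypothetical row like the below,

    insert into students (firstname, lastname, department) values ('John', 'Doe', 'Computer');

    the browser should show a JSON array along the lines of:

    [{"id":1,"firstName":"John","lastName":"Doe","department":"Computer"}]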

Github

https://github.com/rkumar9090/student

Related Articles

how to create spring boot application

Categories
pyspark

Inner join in pyspark dataframe with example

In this post, We will learn about Inner join in pyspark dataframe with example. 

Types of join in pyspark dataframe

Before proceeding with the post, we will get familiar with the types of join available in pyspark dataframe.

Types  of join: inner join, cross join, outer join, full join, full_outer join, left join, left_outer join, right join, right_outer join, left_semi join, and left_anti join

What is Inner join ?

Similar to SQL, an inner join helps us to get the matching records between two datasets. To understand it better, we will create two dataframes with the following piece of code.

Sample program for creating two dataframes
spark = SparkSession.builder.appName("Inner Join").getOrCreate()
from pyspark.sql import Row
# Creating dictionary with columns Emp_id and Emp_name
dict=[{"Emp_id" : 123 , "Emp_name" : "Raja" }, {"Emp_id" : 456 , "Emp_name" : "Ravi"}]
# Creating RDD from the above dictionary using parallelize method
rdd=sc.parallelize(dict)
# Converting RDD to dataframe 
df=rdd.toDF()
print("Printing the first dataframe df")
df.show()
Printing the first dataframe df 
+------+--------+
|Emp_id|Emp_name|
+------+--------+
|   123|    Raja|
|   456|    Ravi|
+------+--------+
# Creating dictionary with columns Emp_id and Dep_name 
dict1=[{"Emp_id" : 123 , "Dep_name" : "Computer" } , {"Emp_id" : 456 ,"Dep_name"  :"Economy"} , {"Emp_id" : 789 , "Dep_name" : "History"}]
# Creating RDD from the above dictionary using parallelize method
rdd1=sc.parallelize(dict1)
# Converting RDD to dataframe  
df1=rdd1.toDF()
print("Printing the second dataframe df1")
df1.show()
Printing the second dataframe df1
+--------+------+
|Dep_name|Emp_id|
+--------+------+
|Computer|   123|
| Economy|   456|
| History|   789|
+--------+------+
How to do inner join ?

The syntax of join requires three parameters to be passed –

1) The dataframe to be joined with

2) The column to be joined on

3) The type of join to be done

By default, an inner join will be performed for the third parameter if no input is passed.

First method

Let us see the first method of doing an inner join in pyspark dataframe with an example.

# Inner joining the two dataframes df and df1 based on the column Emp_id 
df2=df.join(df1,['Emp_id'], how = 'inner')
print("Printing the dataframe df2")
df2.show()
Printing the dataframe df2
+------+--------+--------+
|Emp_id|Emp_name|Dep_name|
+------+--------+--------+
|   123|    Raja|Computer|
|   456|    Ravi| Economy|
+------+--------+--------+
Second method
# Inner joining the two dataframes df and df1 based on the column Emp_id with default join i.e inner join
df3=df.join(df1,['Emp_id'])
print("Printing the dataframe df3")
df3.show()
Printing the dataframe df3
+------+--------+--------+
|Emp_id|Emp_name|Dep_name|
+------+--------+--------+
|   123|    Raja|Computer|
|   456|    Ravi| Economy|
+------+--------+--------+
Reference

https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=join#pyspark.sql.DataFrame.join

Categories
pyspark

Where condition in pyspark with example

In this post, we will understand the usage of where condition in pyspark with example.

Where condition in pyspark

The where condition in pyspark works in a similar manner to the where clause in SQL.

A plain equality comparison cannot match null values. In that case, the isNull() and isNotNull() methods used with the where condition help us deal with the null values.

Sample program in pyspark

In the below sample program, the dictionary data1 is created with key and value pairs, and the dataframe df1 is created from it with rows and columns.

Using the createDataFrame method, the dictionary data1 is converted to the dataframe df1.

Here, we can use isNull() or isNotNull() to filter the null or non-null values.

from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .appName("Filtering Null records") \
    .getOrCreate()
# Creating dictionary
data1=[{"Name" : 'Usha', "Class" : 7, "Marks" : 250 }, \
{"Name" : 'Rajesh' , "Class" : 5, "Marks" : None }]
# Converting dictionary to dataframe
df1=spark.createDataFrame(data1)
df1.show()
# Filtering Null records 
df2=df1.where(df1["Marks"].isNull())
df2.show()
#Filtering Non-Null records
df3=df1.where(df1["Marks"].isNotNull())
df3.show()
Output

The dataframe df1 is created from the dictionary with one null record and one non-null record using the above sample program.

The dataframe df2 filters only the null records whereas the dataframe df3 filters the non-null records.

Other than filtering null and non-null values, we can even use where() to filter on any particular value, as sketched after the output below.

Printing dataframe df1
+-----+-----+------+
|Class|Marks|  Name|
+-----+-----+------+
|    7|  250|  Usha|
|    5| null|Rajesh|
+-----+-----+------+
Printing dataframe df2
+-----+-----+------+
|Class|Marks|  Name|
+-----+-----+------+
|    5| null|Rajesh|
+-----+-----+------+
Printing dataframe df3
+-----+-----+----+
|Class|Marks|Name|
+-----+-----+----+
|    7|  250|Usha|
+-----+-----+----+
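A small sketch of filtering on a particular value, assuming the df1 dataframe above:

# Filtering records where Class equals 7
df4=df1.where(df1["Class"] == 7)
df4.show()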
Reference

https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.DataFrame.filter

Please refer to the below link for understanding the filter condition in pyspark with example.

https://beginnersbug.com/how-to-use-filter-condition-in-pyspark/

Categories
microservices spring-boot

how to create rest service using spring boot

In this post, we will learn how to create a rest service using spring boot.

What is Rest Service

REST (REpresentational State Transfer) is a style of web service that is lightweight, maintainable, and scalable in nature.
The underlying protocol for REST is HTTP, the basic web protocol.

What You Will learn

By the end of this tutorial, you will know how to create a spring boot application with a rest service.

Rest Service Annotation
// This annotation is used to mark a class as a controller class
@RestController
// This annotation maps HTTP GET requests to handler methods
@GetMapping
application.properties
server.port=8080
Dependency
<parent>                                                                    
  <groupId>org.springframework.boot</groupId>                               
  <artifactId>spring-boot-starter-parent</artifactId>                       
  <version>2.2.6.RELEASE</version>                                          
  <relativePath /> <!-- lookup parent from repository -->                   
</parent>

<dependency>                                                                
  <groupId>org.springframework.boot</groupId>                               
  <artifactId>spring-boot-starter-web</artifactId>                          
</dependency>

<dependencyManagement>                                                      
  <dependencies>                                                            
    <dependency>                                                            
      <groupId>org.springframework.cloud</groupId>                          
      <artifactId>spring-cloud-dependencies</artifactId>                    
      <version>${spring-cloud.version}</version>                            
      <type>pom</type>                                                      
      <scope>import</scope>                                                 
    </dependency>                                                           
  </dependencies>                                                           
</dependencyManagement> 
Sample Code
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// This annotation is used to mark a class as a controller class
@RestController
public class StudentController {

	// To retrieve a value we can use the GET method
	@GetMapping("/name")
	public String getName() {
		return "BeginnersBug";
	}
}

In the above example, we are using a simple method to retrieve a hard-coded value from the spring boot rest service.

@RestController

This annotation is the combination of @Controller and @ResponseBody, and it marks a particular class as a controller class.
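For illustration, the below sketch is equivalent to the @RestController version of the sample code above:

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;

// @Controller plus @ResponseBody behaves like @RestController
@Controller
public class StudentController {

	@GetMapping("/name")
	@ResponseBody
	public String getName() {
		return "BeginnersBug";
	}
}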

@GetMapping

This annotation maps HTTP GET requests onto specific handler methods.

@PostMapping

This annotation maps HTTP POST requests onto specific handler methods.
Note: we did not use this annotation in this tutorial; a hypothetical sketch is shown below.
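Though not used in this tutorial's sample code, a POST handler could look like the below sketch (the class and method names here are made up for illustration):

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

	// A hypothetical handler that accepts a plain-text body
	@PostMapping("/name")
	public String saveName(@RequestBody String name) {
		return "Hello " + name;
	}
}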

Time needed: 45 minutes

Steps

  1. Create Spring boot Project

    Follow this tutorial to create a spring boot application:
    https://beginnersbug.com/how-to-create-spring-boot-application/

  2. Create Controller Class

    Once you have created and imported the spring boot application in Eclipse,
    create a class. Here I created StudentController.java

  3. Add @RestController annotation

    Add the @RestController annotation at the class level as in the above sample code

  4. Create a method to return String

    Here I created a simple method with return type as String
    public String getName() { return "BeginnersBug"; }

  5. Add @GetMapping annotation

    Add the @GetMapping("/name") annotation at the method level as in the above sample code

  6. Edit application.properties

    Navigate to application.properties under src/main/resources/
    and add this: server.port=8080

  7. Run

    Navigate to the main class, right click, and choose Run As –> Java Application

  8. Testing

    Open the browser at http://localhost:8080/name and you can see BeginnersBug in the browser

Github

https://github.com/rkumar9090/student

Related Articles

how to create spring boot application

Categories
microservices spring-boot

how to create spring boot application

In this tutorial, we will learn how to create a spring boot application.

What is Spring boot

Spring Boot makes it easy to create stand-alone, production-grade applications. It internally uses the Spring framework and has an embedded Tomcat server, so there is no need to deploy a WAR to any server.

It is also easy to create, configure, and run.

What You Will learn

By the end of this tutorial, you will know how to create a spring boot application.

Annotation
@SpringBootApplication
Dependency
<parent>                                                              
  <groupId>org.springframework.boot</groupId>                         
  <artifactId>spring-boot-starter-parent</artifactId>                 
  <version>2.2.6.RELEASE</version>                                    
  <relativePath /> <!-- lookup parent from repository -->             
</parent>
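For reference, the main class generated by Spring Initializr looks like the below sketch; the class name will match the name you enter (StudentApplication here is just an example):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class StudentApplication {

	public static void main(String[] args) {
		SpringApplication.run(StudentApplication.class, args);
	}
}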

Time needed: 30 minutes

Steps

  1. Navigate to Spring initializr website

    Click on this URL: https://start.spring.io/

  2. Choose project as Maven

    In this tutorial I am using Maven, but you can also use Gradle

  3. Choose language as Java

    Here I am using Java, but there are options for Kotlin and Groovy also

  4. Spring Boot Version

    Please select a stable version; I am using 2.2.6

  5. Enter Group, Artifact, Name & Description

    Enter the group id, artifact id, and name as you want

  6. Packaging as Jar

    Choose Jar, but there is an option for War also

  7. Java Version

    Here I am using java 8

  8. Add Spring Web as dependency

    In case you want to expose a rest service, click on Add Dependency and select Spring Web.

  9. Click on Generate

    Once you have filled in all these details, click on the Generate button.
    It will download the project as a .zip file

  10. Extract the downloaded file

    Once your download completes, extract the downloaded .zip file

  11. Import in Eclipse

    After extracting, import the project into Eclipse

  12. Navigate to Main Class

    Navigate to the main class, which will be under src/main/java,
    and make sure the class has the @SpringBootApplication annotation

  13. Run

    Right click the class and choose Run As –> Java Application

  14. Verify logs

    Once your application starts, you can see the below logs in the console

Tomcat started on port(s): 8080 (http) with context path ''
Updating port to 8080
Started StudentApplication in 22.834 seconds (JVM running for 24.503)
Related Articles

create netflix eureka discovery server using spring boot

register spring boot micro-services to eureka discovery

Categories
microservices spring-boot

register spring boot micro-services to eureka discovery

This tutorial is a continuation of our previous tutorial. Here we will learn how to register spring boot micro-services to the eureka discovery server.

If you want to learn about eureka, please refer to my previous post: https://beginnersbug.com/create-netflix-eureka-discovery-server-using-spring-boot/

Prerequisite
  • JDK
  • Eclipse
  • Spring boot micro service knowledge
  • Netflix Eureka knowledge
Annotation used to register
@EnableDiscoveryClient
Dependency
<parent>                                                                
  <groupId>org.springframework.boot</groupId>                           
  <artifactId>spring-boot-starter-parent</artifactId>                   
  <version>2.2.6.RELEASE</version>                                      
  <relativePath /> <!-- lookup parent from repository -->               
</parent> 

<dependency>                                                            
  <groupId>org.springframework.cloud</groupId>                          
  <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>   
</dependency> 

<dependency>                                                            
  <groupId>org.springframework.boot</groupId>                           
  <artifactId>spring-boot-starter-web</artifactId>                      
</dependency>

<dependencyManagement>                                                  
  <dependencies>                                                        
    <dependency>                                                        
      <groupId>org.springframework.cloud</groupId>                      
      <artifactId>spring-cloud-dependencies</artifactId>                
      <version>${spring-cloud.version}</version>                        
      <type>pom</type>                                                  
      <scope>import</scope>                                             
    </dependency>                                                       
  </dependencies>                                                       
</dependencyManagement> 
application.properties
server.port=8080
spring.application.name=student
eureka.client.registerWithEureka=true
eureka.client.fetchRegistry=true
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
Main Class
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class StudentApplication {

	public static void main(String[] args) {
		SpringApplication.run(StudentApplication.class, args);
	}
}

Time needed: 30 minutes

Steps

  1. Create a spring boot application

    Use https://start.spring.io/ to create a spring boot application

  2. Add Dependency

    Add the Eureka Discovery Client and Spring Web dependencies in the dependency search box

  3. Click on Generate

    Once you click Generate, your project will download

  4. Import into your IDE

    Import the downloaded project into your IDE. Here I used the Eclipse IDE for my development

  5. Add @EnableDiscoveryClient annotation

    Once you have imported your project in the IDE, go to the main class and add the @EnableDiscoveryClient annotation.
    This annotation will register the spring boot micro-service to the eureka discovery server

  6. Change application.properties

    eureka.client.registerWithEureka=true
    eureka.client.fetchRegistry=true
    eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka

  7. Build and Run

    Now you can run your application

  8. Testing

    Open browser and navigate to http://localhost:8761/

  9. You can see your student micro-service under the Instances section

Exception

In case you didn’t configure the properties properly, you might get the below exception.

TransportException: Cannot execute request on any known server
Solution

Confirm the below property in your application.properties:

eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka

Make sure your discovery server is running on http://localhost:8761

Github

https://github.com/rkumar9090/student

Related Articles

create netflix eureka discovery server using spring boot