Categories
spring-boot

no main manifest attribute, in – spring boot

In this post, we will learn to resolve the "no main manifest attribute, in" error in a Spring Boot application.

Once your jar is built using the Maven command, you will attempt to run it with java -jar, and the error appears.

This error occurs if you did not add the Spring Boot Maven build plugin to your pom.xml file.

Solution

Add the below Maven build plugin to your pom.xml

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>repackage</goal>
			</goals>
		</execution>
	</executions>
</plugin>

After adding the above plugin to your pom.xml, build your Spring Boot project using the Maven command mvn clean install. The complete pom.xml is shown below.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.3.0.RELEASE</version>
		<relativePath /> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.example</groupId>
	<artifactId>demo</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>demo</name>
	<description>Demo project for Spring Boot</description>

	<properties>
		<java.version>1.8</java.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
			<exclusions>
				<exclusion>
					<groupId>org.junit.vintage</groupId>
					<artifactId>junit-vintage-engine</artifactId>
				</exclusion>
			</exclusions>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
				<executions>
					<execution>
						<goals>
							<goal>repackage</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

After adding the Maven build plugin, you can build and run your Spring Boot application successfully.

Categories
java spring-boot

configure datasource programmatically in spring boot

In this post, we will learn how to configure a datasource programmatically in Spring Boot.

In the below example, we are using MySQL as the database. We read the database properties from application.properties using @ConfigurationProperties.

Using DataSourceBuilder
@ConfigurationProperties(prefix = "datasource.custom")
@Bean
@Primary
public DataSource dataSource() {
	return DataSourceBuilder.create().build();
}

In the above snippet, Spring Boot will create the datasource with the values from application.properties

datasource.custom.jdbcUrl=jdbc:mysql://localhost:3306/beginnersbug
datasource.custom.username=root
datasource.custom.password=password
datasource.custom.driverClassName=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect

Make sure your prefix matches the property names exactly. In this example, I am using datasource.custom as the prefix

CREATE TABLE students (
    id int NOT NULL,
    firstname varchar(255) NOT NULL,
    lastname varchar(255) NOT NULL,
    department int,
    PRIMARY KEY (id)
);
Dependency
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<scope>runtime</scope>
</dependency>

Configuration Class

Your configuration class should be annotated with @Configuration

import javax.sql.DataSource;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DatasourceConfig {

	@ConfigurationProperties(prefix = "datasource.custom")
	@Bean
	@Primary
	public DataSource dataSource() {
		return DataSourceBuilder.create().build();
	}

}

In this program, we are connecting to a MySQL database. Below is the DAO interface, where I am using JpaRepository for CRUD operations

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

import com.beginnersbug.datasource.model.Student;

@Repository
public interface StudentDao extends JpaRepository<Student, Long> {

}

From the controller class, we invoke the DAO interface to retrieve the students table data


import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import com.beginnersbug.datasource.StudentDao;
import com.beginnersbug.datasource.model.Student;

@RestController()
@RequestMapping("/api/student")
public class StudentController {

	@Autowired
	StudentDao studentsDao;

	@RequestMapping(method = RequestMethod.GET)
	public List<Student> getStudentsList() {
		return studentsDao.findAll();
	}

}
Testing

Hit http://localhost:8080/api/student from the Chrome browser. It will return all the rows from the students table

Github

https://github.com/rkumar9090/datasource

Related Articles

connect MySQL database from spring boot

Categories
java

com.sun.net.httpserver.httpexchange not found in eclipse

In this post, we will learn how to resolve the com.sun.net.httpserver.HttpExchange not found issue in Eclipse

While trying to add an HTTP server to a core Java application, I faced an issue with the below imports

import com.sun.net.httpserver.Headers;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

If you are not able to add the above imports in your Java code, please verify the below points

  • You should use a Java version above 1.6
  • Make sure you have configured the correct JDK in your build path

If the above things are correct and you are still facing the issue, you need to do the following in Eclipse

You need to add the com/sun/net/httpserver/ package to the access rules

  • Open Eclipse
  • Navigate to java build path
  • Click on the access rule under JRE System Library
  • Click on the Edit button and add an access rule as shown in the below image
Reference

https://stackoverflow.com/questions/13155734/eclipse-cant-recognize-com-sun-net-httpserver-httpserver-package

Related Articles

add rest service in core java application

Categories
java

add rest service in core java application

In this post, we will learn how to add a REST service in a core Java application

We are in the Spring Boot world, where exposing an HTTP endpoint is much easier, but when it comes to a standalone Java application, we may get confused.

Java has a built-in feature to expose an HTTP endpoint from a standalone application, using the com.sun.net.httpserver package.

In the below example, we are exposing an HTTP endpoint on port number 8080 with the path /health

Example

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class HttpServerExample {

	public static void main(String[] args) throws Exception {
		try {
			HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
			server.createContext("/health", new Handler());
			server.setExecutor(null);
			server.start();
		} catch (Exception e) {
			e.printStackTrace();
		}

	}

	static class Handler implements HttpHandler {
		@Override
		public void handle(HttpExchange t) throws IOException {
			String response = "Up & Running";
			// The second argument must be the byte length of the response body
			byte[] body = response.getBytes();
			t.sendResponseHeaders(200, body.length);
			OutputStream os = t.getResponseBody();
			os.write(body);
			os.close();
		}
	}
}
Output

http://localhost:8080/health

Up & Running

If you cannot find the com.sun.net.httpserver package in your Eclipse, follow this URL: https://beginnersbug.com/com-sun-net-httpserver-httpexchange-not-found-in-eclipse
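Beyond hitting the URL in a browser, the endpoint can also be verified programmatically with the JDK's own HttpURLConnection. The sketch below (the class and method names are ours, for illustration) starts the same kind of /health server and calls it once; it binds an ephemeral port instead of the article's 8080 so it cannot clash with an already running instance:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class HealthCheckClient {

	// Starts a /health server (as in the article) and calls it once.
	// Port 0 asks the OS for a free ephemeral port.
	public static String probeHealth() throws IOException {
		HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
		server.createContext("/health", new HttpHandler() {
			@Override
			public void handle(HttpExchange t) throws IOException {
				byte[] body = "Up & Running".getBytes();
				t.sendResponseHeaders(200, body.length);
				OutputStream os = t.getResponseBody();
				os.write(body);
				os.close();
			}
		});
		server.start();
		try {
			// Ask the server which port it actually got, then call it
			int port = server.getAddress().getPort();
			HttpURLConnection conn = (HttpURLConnection) new URL(
					"http://localhost:" + port + "/health").openConnection();
			BufferedReader reader = new BufferedReader(
					new InputStreamReader(conn.getInputStream()));
			String result = conn.getResponseCode() + " " + reader.readLine();
			reader.close();
			return result;
		} finally {
			server.stop(0);
		}
	}

	public static void main(String[] args) throws IOException {
		System.out.println(probeHealth()); // prints: 200 Up & Running
	}
}
```

This is handy for smoke tests, since the whole round trip runs inside one JVM with no external tooling.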

Github

https://github.com/rkumar9090/BeginnersBug/blob/master/BegineersBug/src/com/geeks/example/HttpServerExample.java

Categories
collections java

convert iterator to list in java

In this tutorial, we will learn to convert an iterator to a list in Java

Here we give two approaches to convert an iterator to a list in Java

Syntax in Java 8
ArrayList<String> list = new ArrayList<String>();
iterator.forEachRemaining(list::add);
Syntax in Java 7
ArrayList<String> list = new ArrayList<String>();
while (iterator.hasNext()) {
  list.add(iterator.next());
}
Example
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IteratorToList {

	public static void main(String[] args) {

		List<String> names = Arrays.asList("Rajesh", "Kumar", "Beginners", "Bug");
		// An iterator can be traversed only once, so create a fresh one per approach
		convertToListJava7(names.iterator());
		convertToListJava8(names.iterator());

	}

	/**
	 * Java 7 Approach
	 * 
	 * @param iterator
	 * @return
	 */
	public static ArrayList<String> convertToListJava7(Iterator<String> iterator) {
		ArrayList<String> list = new ArrayList<String>();
		while (iterator.hasNext()) {
			list.add(iterator.next());
		}

		return list;
	}

	/**
	 * Java 8 Approach
	 * 
	 * @param iterator
	 * @return
	 */
	public static ArrayList<String> convertToListJava8(Iterator<String> iterator) {
		ArrayList<String> list = new ArrayList<String>();
		iterator.forEachRemaining(list::add);
		return list;
	}

}
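One pitfall worth noting: a Java Iterator can be traversed only once. If the same iterator instance were passed to both helper methods, the second call would receive an exhausted iterator and return an empty list, so a fresh iterator is needed for each call. A minimal sketch of this behaviour (the class and method names are ours, for illustration):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IteratorOnce {

	// Fully traverses the iterator, then reports whether anything is left
	public static boolean hasElementsAfterTraversal(Iterator<String> it) {
		while (it.hasNext()) {
			it.next();
		}
		// A consumed iterator has nothing more to yield
		return it.hasNext();
	}

	public static void main(String[] args) {
		List<String> names = Arrays.asList("Beginners", "Bug");
		Iterator<String> it = names.iterator();
		System.out.println(hasElementsAfterTraversal(it)); // prints: false
	}
}
```

Calling list.iterator() again is cheap, so obtaining a new iterator per conversion is the idiomatic fix.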
Github

https://github.com/rkumar9090/BeginnersBug/blob/master/BegineersBug/src/com/geeks/example/list/IteratorToList.java

Related Articles

Iterate list using streams in java

Categories
mysql

third highest salary for each department in a table using MySQL

In this post, let us learn to get the third highest salary for each department in a table using MySQL.

MySQL table creation

In order to find the third highest salary for each department in a table using MySQL, we will create a table as shown below.

The following table contains the salary details of a few employees across different departments.

Create Table Query
create table employee (Dep_name varchar(20),Emp_id int,Salary int)
Insert Query
insert into Employee values('Computer',564,1400);
insert into Employee values('Computer',123,2500);
insert into Employee values('Computer',943,3200);
insert into Employee values('History',987,3450);
insert into Employee values('Economy',456,4500);
insert into Employee values('Economy',678,6700);
insert into Employee values('Economy',789,7200);
insert into Employee values('Computer',324,2500);
Table Data
mysql> select * from Employee;
+----------+--------+--------+
| Dep_name | Emp_id | Salary |
+----------+--------+--------+
| Computer |    564 |   1400 |
| Computer |    123 |   2500 |
| Computer |    943 |   3200 |
| History  |    987 |   3450 |
| Economy  |    456 |   4500 |
| Economy  |    678 |   6700 |
| Economy  |    789 |   7200 |
| Computer |    324 |   2500 |
+----------+--------+--------+
8 rows in set (0.00 sec)
nth Salary calculation for each group

We shall find the nth highest salary with the help of the DENSE_RANK function available in MySQL. Here are its syntax and usage.

Syntax :

DENSE_RANK() OVER (PARTITION BY <columnname> ORDER BY <columnname> desc)

The dense_rank function helps us rank the records over each partition. To get to know rank and dense rank well, please refer to the below link.

https://beginnersbug.com/rank-and-dense-rank-in-pyspark-dataframe/

The partition by clause divides the entire data into groups based on the specified column.

The order by clause sorts the rows within each group by the column on which the nth calculation needs to be performed.

Calculating dense rank

For the employee table, the entire data gets divided based on the Dep_name column and ordered by the salary column.

The dense rank function will be applied to each partitioned data to calculate the highest salary.

select Dep_name,Emp_id,Salary,DENSE_RANK() OVER (PARTITION BY Dep_name ORDER BY Salary desc) as denserank from employee;
+----------+--------+--------+-----------+
| Dep_name | Emp_id | Salary | denserank |
+----------+--------+--------+-----------+
| Computer |    943 |   3200 |         1 |
| Computer |    123 |   2500 |         2 |
| Computer |    324 |   2500 |         2 |
| Computer |    564 |   1400 |         3 |
| Economy  |    789 |   7200 |         1 |
| Economy  |    678 |   6700 |         2 |
| Economy  |    456 |   4500 |         3 |
| History  |    987 |   3450 |         1 |
+----------+--------+--------+-----------+
8 rows in set (0.00 sec)
Third highest salary for each department

With the calculated dense rank value for each department, we could filter the third dense rank to get the third highest salary.

select a.Dep_name,a.Emp_id,a.Salary from (select Dep_name,Emp_id,Salary,DENSE_RANK() OVER (PARTITION BY Dep_name ORDER BY Salary desc) as denserank from employee) a where a.denserank=3;
+----------+--------+--------+
| Dep_name | Emp_id | Salary |
+----------+--------+--------+
| Computer |    564 |   1400 |
| Economy  |    456 |   4500 |
+----------+--------+--------+
Reference

https://www.mysqltutorial.org/mysql-window-functions/mysql-dense_rank-function/

Categories
pyspark

rank and dense rank in pyspark dataframe

In this post, let us learn about rank and dense rank in a pyspark dataframe using window functions, with examples.

Rank and dense rank

The rank and dense rank in pyspark dataframe help us to rank the records based on a particular column.

This works in a similar manner as the row number function. To understand the row number function better, please refer to the below link.

The row number function assigns consecutive numbers even to rows with duplicate (non-unique) values, whereas rank and dense rank help us deal with such duplicates by assigning them the same rank.

Sample program – creating dataframe

We could create the dataframe containing the salary details of some employees from different departments using the below program.

from pyspark.sql import Row
# Creating dictionary with employee and their salary details 
dict1=[{"Emp_id" : 123 , "Dep_name" : "Computer"  , "Salary" : 2500 } , {"Emp_id" : 456 ,"Dep_name"  :"Economy" , "Salary" : 4500} , {"Emp_id" : 789 , "Dep_name" : "Economy" , "Salary" : 7200 } , {"Emp_id" : 564 , "Dep_name" : "Computer" , "Salary" : 1400 } , {"Emp_id" : 987 , "Dep_name" : "History" , "Salary" : 3450 }, {"Emp_id" :678 , "Dep_name" :"Economy" ,"Salary": 4500},{"Emp_id" : 943 , "Dep_name" : "Computer" , "Salary" : 3200 }]
# Creating RDD from the dictionary created above
rdd1=sc.parallelize(dict1)
# Converting RDD to dataframe
df1=rdd1.toDF()
print("Printing the dataframe df1")
df1.show()
Printing the dataframe df1
+--------+------+------+
|Dep_name|Emp_id|Salary|
+--------+------+------+
|Computer|   123|  2500|
| Economy|   456|  4500|
| Economy|   789|  7200|
|Computer|   564|  1400|
| History|   987|  3450|
| Economy|   678|  4500|
|Computer|   943|  3200|
+--------+------+------+
Sample program – rank()

In order to use the rank and dense rank in our program, we require below libraries.

from pyspark.sql import Window
from pyspark.sql.functions import rank,dense_rank

from pyspark.sql import Window
from pyspark.sql.functions import rank
df2=df1.withColumn("rank",rank().over(Window.partitionBy("Dep_name").orderBy("Salary")))
print("Printing the dataframe df2")
df2.show()

In the below output, the Economy department contains two employees with the first rank. This is because both employees have the same salary.

But instead of assigning the second rank to the next salary, the third rank is assigned. This is how the rank function works: it skips ranking positions after a tie.

Printing the dataframe df2
+--------+------+------+----+
|Dep_name|Emp_id|Salary|rank|
+--------+------+------+----+
|Computer|   564|  1400|   1|
|Computer|   123|  2500|   2|
|Computer|   943|  3200|   3|
| History|   987|  3450|   1|
| Economy|   456|  4500|   1|
| Economy|   678|  4500|   1|
| Economy|   789|  7200|   3|
+--------+------+------+----+
Sample program – dense rank()

Unlike rank, dense rank does not skip the ranking order. For the same scenario discussed earlier, the second rank is assigned in this case instead of skipping to the third.

from pyspark.sql import Window
from pyspark.sql.functions import dense_rank
df3=df1.withColumn("denserank",dense_rank().over(Window.partitionBy("Dep_name").orderBy("Salary")))
print("Printing the dataframe df3")
df3.show()
Printing the dataframe df3
+--------+------+------+---------+
|Dep_name|Emp_id|Salary|denserank|
+--------+------+------+---------+
|Computer|   564|  1400|        1|
|Computer|   123|  2500|        2|
|Computer|   943|  3200|        3|
| History|   987|  3450|        1|
| Economy|   456|  4500|        1|
| Economy|   678|  4500|        1|
| Economy|   789|  7200|        2|
+--------+------+------+---------+
Reference

http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=window#pyspark.sql.Column.over

Categories
pyspark

row_number in pyspark dataframe

In this post, we will learn to use row_number in pyspark dataframe with examples.

What is row_number ?

The row_number function in a pyspark dataframe assigns consecutive numbering over a set of rows.
The window function in pyspark helps us to achieve it.
To get to know more about the window function, please refer to the below link.

Creating dataframe 

Before moving into the concept, Let us create a dataframe using the below program.

from pyspark.sql import Row
# Creating dictionary with employee and their salary details 
dict1=[{"Emp_id" : 123 , "Dep_name" : "Computer"  , "Salary" : 2500 } , {"Emp_id" : 456 ,"Dep_name"  :"Economy" , "Salary" : 4500} , {"Emp_id" : 789 , "Dep_name" : "Economy" , "Salary" : 7200 } , {"Emp_id" : 564 , "Dep_name" : "Computer" , "Salary" : 1400 } , {"Emp_id" : 987 , "Dep_name" : "History" , "Salary" : 3450 }, {"Emp_id" :678 , "Dep_name" :"Economy" ,"Salary": 6700},{"Emp_id" : 943 , "Dep_name" : "Computer" , "Salary" : 3200 }]
# Creating RDD from the dictionary created above
rdd1=sc.parallelize(dict1)
# Converting RDD to dataframe
df1=rdd1.toDF()
print("Printing the dataframe df1")
df1.show()

Thus we created the below dataframe with the salary details of some employees from various departments.

Printing the dataframe df1
+--------+------+------+
|Dep_name|Emp_id|Salary|
+--------+------+------+
|Computer|   123|  2500|
| Economy|   456|  4500|
| Economy|   789|  7200|
|Computer|   564|  1400|
| History|   987|  3450|
| Economy|   678|  6700|
|Computer|   943|  3200|
+--------+------+------+
Sample program – row_number

With the below segment of code, we can populate the row number based on the Salary for each department separately.

We need to import the following libraries before using window and row_number in the code.

The orderBy clause sorts the values before generating the row number.

from pyspark.sql import Window
from pyspark.sql.functions import row_number
df2=df1.withColumn("row_num",row_number().over(Window.partitionBy("Dep_name").orderBy("Salary")))
print("Printing the dataframe df2")
df2.show()
Printing the dataframe df2
+--------+------+------+-------+
|Dep_name|Emp_id|Salary|row_num|
+--------+------+------+-------+
|Computer|   564|  1400|      1|
|Computer|   123|  2500|      2|
|Computer|   943|  3200|      3|
| History|   987|  3450|      1|
| Economy|   456|  4500|      1|
| Economy|   678|  6700|      2|
| Economy|   789|  7200|      3|
+--------+------+------+-------+
Reference

https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.row_number

Categories
spring-boot

add swagger in spring boot application

In this tutorial, we will learn to add swagger in spring boot application

What is Swagger ?

Swagger is an open-source software framework backed by a large ecosystem of tools that helps developers design, build, document, and consume RESTful web services.

How to add it to Spring Boot

It is easy to integrate Swagger with Spring Boot. With the help of a few dependencies and some configuration, we can easily integrate it with Spring Boot

Dependency
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.6.1</version>	
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.6.1</version>
</dependency>
Annotation
@EnableSwagger2
Bean
@Bean
public Docket api() {
	return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any())
			.paths(PathSelectors.any()).build();
}
Main Class

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@SpringBootApplication
@EnableSwagger2
public class StudentServiceApplication {

	public static void main(String[] args) {
		SpringApplication.run(StudentServiceApplication.class, args);
	}

	@Bean
	public Docket api() {
		return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any())
				.paths(PathSelectors.any()).build();
	}
}

Time needed: 10 minutes

Steps

  1. Dependency in pom.xml

    Add the above dependencies in your pom.xml

  2. Annotation in the configuration file

    Add the above annotation in a configuration file

  3. Bean method

    Add the above bean method

  4. Testing

    Open http://localhost:8080/swagger-ui.html in a browser

Conclusion

With the help of two dependencies and one bean method, we can easily add Swagger in a Spring Boot application

Related Articles

crud operations in spring boot with Mysql

Categories
spring-boot

Field ‘id’ doesn’t have a default value

Field ‘id’ doesn’t have a default value: You will face this exception when your model class or table is not configured properly

Exception
java.sql.SQLException: Field ‘id’ doesn’t have a default value
Solution
  • Make sure your table has a primary key with the AUTO_INCREMENT property
  • In the case of an Oracle database, your table should have a sequence
  • Your model class should have the below properties
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Long id;

To learn more about Spring Boot database operations, use this link:
https://beginnersbug.com/crud-operations-in-spring-boot-with-mysql/

References

https://stackoverflow.com/questions/804514/hibernate-field-id-doesnt-have-a-default-value