Create a simple helper function that parses a delimited string and that you can call inline in your SELECT statement.
CREATE FUNCTION dbo.udf_Parse (
    @InputString VARCHAR(1000)
    ,@Separator CHAR(1)
    ,@PositionInString TINYINT
)
RETURNS VARCHAR(250)
AS
BEGIN
    DECLARE @OutputString VARCHAR(250);

    DECLARE @StringList TABLE (
        ID INT IDENTITY(1,1)
        ,VAL VARCHAR(250)
    );

    -- STRING_SPLIT requires SQL Server 2016+ (compatibility level 130 or higher).
    -- Caveat: STRING_SPLIT does not guarantee output order, so the IDENTITY
    -- values are not guaranteed to match token positions (in practice they
    -- usually do; SQL Server 2022 adds an ordinal option to make this reliable).
    INSERT INTO @StringList (VAL)
    SELECT value
    FROM STRING_SPLIT(@InputString, @Separator);

    SELECT @OutputString = VAL
    FROM @StringList
    WHERE ID = @PositionInString;

    RETURN @OutputString;
END
GO
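If you are on SQL Server 2022 or Azure SQL Database, STRING_SPLIT accepts a third argument, enable_ordinal, which adds an ordinal column with a guaranteed token position and removes the ordering caveat above. A sketch of that variant (the name udf_ParseOrdinal is my own, not from the original):

```sql
-- SQL Server 2022+ / Azure SQL only: passing 1 as the third argument makes
-- STRING_SPLIT return an "ordinal" column that reliably reflects position.
CREATE FUNCTION dbo.udf_ParseOrdinal (
    @InputString VARCHAR(1000)
    ,@Separator CHAR(1)
    ,@PositionInString TINYINT
)
RETURNS VARCHAR(250)
AS
BEGIN
    RETURN (
        SELECT value
        FROM STRING_SPLIT(@InputString, @Separator, 1)
        WHERE ordinal = @PositionInString
    );
END
GO
```

This version also avoids the table variable entirely, so it does less work per call.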
Then, you can use it like this:
-- create test data
IF OBJECT_ID('tempdb..#tmpOriginal', 'U') IS NOT NULL
    DROP TABLE #tmpOriginal;

CREATE TABLE #tmpOriginal (
    email_column VARCHAR(50)
);

INSERT INTO #tmpOriginal (email_column)
VALUES ('test@gmail.com,test1@gmail.com')
    ,('foo@email.com,bar@email.com');
-- (end test data)

-- how to use:
-- (you can put this in your stored procedure if you want)
SELECT
    dbo.udf_Parse(email_column, ',', 1) AS Email1
    ,dbo.udf_Parse(email_column, ',', 2) AS Email2
    ,dbo.udf_Parse(email_column, ',', 3) AS Email3
FROM #tmpOriginal;

-- cleanup test data
DROP TABLE #tmpOriginal;
Produces:
Email1 | Email2 | Email3
-----------------------------------------
test@gmail.com | test1@gmail.com | NULL
foo@email.com | bar@email.com | NULL
With all that said, storing CSV in a single column is a problematic design, and this scalar function is an expensive way to read it: it runs once per row per column, so it will be slow on large data sets. Still, you can use it as-is, or use it once to parse the data and migrate it into a better schema.
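For the migration route, you don't actually need the scalar function at all; a set-based CROSS APPLY over STRING_SPLIT is much cheaper because it splits every row in one pass. A sketch, reusing the test table above (the EmailAddresses target name is an assumption):

```sql
-- Unpivot the CSV column into one row per address, set-based.
-- #tmpOriginal / email_column come from the test setup above;
-- EmailAddresses is a hypothetical target table for the migrated data.
SELECT o.email_column AS original_csv
    ,LTRIM(s.value) AS email_address
-- INTO EmailAddresses  -- uncomment to materialize the normalized table
FROM #tmpOriginal AS o
CROSS APPLY STRING_SPLIT(o.email_column, ',') AS s;
```

Once the data lives one-address-per-row, the parsing function (and its performance cost) goes away entirely.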