Adds extra columns to the DataFrame.
Calculates basic statistics.
Caches each DataFrame that arrives at this block.
Reads rows from an Apache Cassandra table.
Writes rows into a Cassandra table.
Calculates column-based statistics using the MLlib library.
Computes the correlation between data series using the MLlib library.
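The measure computed here presumably matches MLlib's default, the Pearson coefficient; a minimal pure-Python sketch of that computation (the function name and plain-list inputs are illustrative, not the block's API):

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series,
    # the default correlation measure in Spark MLlib's statistics.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```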
Reads a CSV file and generates a Spark DataFrame.
Creates data frames from a static data grid.
Prints the data frame contents to the standard output.
Filters the data frame based on a combination of boolean conditions against fields.
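A sketch of combining boolean conditions against fields, with rows modelled as plain dicts (`make_filter` and its arguments are hypothetical, not the block's API):

```python
def make_filter(conditions, combine="and"):
    # conditions: list of (field, predicate) pairs applied to each row;
    # combine: "and" requires all predicates to hold, "or" any of them.
    def accepts(row):
        results = (pred(row[field]) for field, pred in conditions)
        return all(results) if combine == "and" else any(results)
    return accepts

people = [{"age": 25, "name": "ann"}, {"age": 40, "name": "bob"}]
young_a = make_filter([("age", lambda v: v < 30),
                       ("name", lambda v: v.startswith("a"))])
kept = [row for row in people if young_a(row)]
```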
Calculates new fields based on string expressions in various dialects.
Finds the intersection of the two DataRow RDDs.
Finds the intersection of the two DataRow RDDs. They must have identical metadata.
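The identical-metadata requirement can be sketched as follows, with schemas and rows modelled as plain tuples (the helper is illustrative only, not the block's API):

```python
def intersect(schema0, rows0, schema1, rows1):
    # Refuses to intersect inputs whose metadata differs, then keeps
    # the rows of the first input that also occur in the second.
    if schema0 != schema1:
        raise ValueError("inputs must have identical metadata")
    second = set(rows1)
    return [row for row in rows0 if row in second]

common = intersect(("id", "name"), [(1, "a"), (2, "b")],
                   ("id", "name"), [(2, "b"), (3, "c")])
```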
Reads data from a JDBC source.
Writes data into a database table over JDBC.
Performs a join of the two data frames.
Performs a join of the two data frames. In row conditions, if a field's name is ambiguous, use the "input0" and "input1" prefixes for the first and second input respectively.
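A sketch of the "input0"/"input1" qualification described above, with rows modelled as dicts (the `join` helper and its signature are assumptions, not the block's API):

```python
def join(rows0, rows1, condition):
    # Cross-joins two row sets; a pair of rows is merged whenever the
    # condition holds. Every field is qualified with an "input0." or
    # "input1." prefix, so ambiguous names stay distinguishable.
    out = []
    for r0 in rows0:
        for r1 in rows1:
            scope = {f"input0.{k}": v for k, v in r0.items()}
            scope.update({f"input1.{k}": v for k, v in r1.items()})
            if condition(scope):
                out.append(scope)
    return out

left = [{"id": 1, "name": "ann"}]
right = [{"id": 1, "city": "oslo"}, {"id": 2, "city": "bergen"}]
joined = join(left, right, lambda s: s["input0.id"] == s["input1.id"])
```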
Reads a JSON file, which contains a separate JSON object in each line.
Reads documents from MongoDB.
Writes documents into a MongoDB collection.
Performs a reduceByKey() operation: groups the rows by the selected key, then applies a list of reduce functions to the specified data columns.
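The group-then-reduce behaviour can be sketched in plain Python (column names and row layout are illustrative):

```python
def reduce_by_key(rows, key, reducers):
    # Group rows by the key column, then apply each reduce function
    # to its data column within every group.
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row)
    out = []
    for k, grp in sorted(groups.items()):
        result = {key: k}
        for column, func in reducers:
            result[column] = func([r[column] for r in grp])
        out.append(result)
    return out

rows = [{"dept": "a", "pay": 10}, {"dept": "a", "pay": 20},
        {"dept": "b", "pay": 5}]
totals = reduce_by_key(rows, "dept", [("pay", sum)])
```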
Computes the regression using the MLlib library.
Repartitions the underlying Spark RDD.
HTTP REST client; executes one request per row.
RX transformer built on top of the Ignition FrameTransformer class; works as a bridge between Ignition transformers and RX transformers.
Executes an SQL statement against the inputs.
Executes an SQL statement against the inputs. Each input is injected as a table under the name "inputX" where X is the index of the input.
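The "inputX" table convention can be illustrated with sqlite3 standing in for Spark SQL (the helper and its `(columns, rows)` input shape are assumptions, not the block's API):

```python
import sqlite3

def sql_query(inputs, statement):
    # Registers each (columns, rows) input as a table named "inputX",
    # where X is its index, then runs the statement against them.
    con = sqlite3.connect(":memory:")
    for i, (columns, rows) in enumerate(inputs):
        con.execute(f"CREATE TABLE input{i} ({', '.join(columns)})")
        marks = ", ".join("?" for _ in columns)
        con.executemany(f"INSERT INTO input{i} VALUES ({marks})", rows)
    result = con.execute(statement).fetchall()
    con.close()
    return result

users = (["id", "name"], [(1, "ann"), (2, "bob")])
orders = (["user_id", "total"], [(1, 99)])
fetched = sql_query([users, orders],
                    "SELECT name, total FROM input0 JOIN input1 ON id = user_id")
```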
Modifies, deletes, or retains columns in the data rows.
Sets or drops the Ignition runtime variables.
Reads the text file into a data frame with a single column.
Reads the text file into a data frame with a single column. If the separator is specified, splits the file into multiple rows, otherwise the data frame will contain only one row.
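The optional-separator behaviour can be sketched as follows (the helper and list-of-strings result are illustrative, not the block's API):

```python
import os
import tempfile

def read_text(path, separator=None):
    # With a separator, split the file contents into one row per chunk;
    # otherwise return a single-row result holding the whole file.
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return text.split(separator) if separator else [text]

# Demo: write a small file and read it both ways.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("a\nb\nc")
    path = f.name
multi = read_text(path, "\n")
single = read_text(path)
os.unlink(path)
```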
Writes rows to a CSV file.
Reads a folder of text files.
Merges multiple DataFrames.
Merges multiple DataFrames. All of them must have identical schema.
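The identical-schema check before merging can be sketched with frames modelled as (schema, rows) pairs (illustrative only, not the block's API):

```python
def union(frames):
    # Concatenates (schema, rows) frames after checking that every
    # schema matches the first one.
    schema = frames[0][0]
    for s, _ in frames[1:]:
        if s != schema:
            raise ValueError("all inputs must have identical schema")
    rows = [row for _, rs in frames for row in rs]
    return schema, rows

merged = union([(("id", "name"), [(1, "a")]),
                (("id", "name"), [(2, "b")])])
```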
Factory for AddFields instances.
Factory for BasicStats instances.
Factory for Cache instances.
Factory for CassandraInput instances.
Factory for CassandraOutput instances.
Factory for ColumnStats instances.
Factory for Correlation instances.
Factory for CsvFileInput instances.
Factory for DataGrid instances.
Factory for DebugOutput instances.
Factory for Filter instances.
Factory for Formula instances.
Factory for Intersection instances.
Factory for JdbcInput instances.
Factory for JdbcOutput instances.
Factory for Join instances.
Factory for JsonFileInput instances.
Factory for MongoInput instances.
Factory for MongoOutput instances.
Factory for Reduce instances.
Factory for Regression instances.
Factory for Repartition instances.
Factory for RestClient instances.
Factory for SQLQuery instances.
Factory for SelectValues instances.
Factory for SetVariables instances.
Spark RX blocks.
Factory for TextFileInput instances.
Factory for TextFileOutput instances.
Factory for TextFolderInput instances.
Factory for Union instances.
Converts a Spark DataFrame into a tabledata Map.
Extracts a string from JSON and splits into a list of strings using the separator.
Extracts a list of StructField elements from JSON.
Parses the string value using the supplied type name.
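One plausible shape for such type-name-driven parsing, sketched in Python (the type names and helper are assumptions, not the library's):

```python
def parse_value(value, type_name):
    # Maps a type name onto a parser for the raw string value.
    parsers = {
        "int": int,
        "double": float,
        "boolean": lambda s: s.lower() == "true",
        "string": str,
    }
    try:
        return parsers[type_name](value)
    except KeyError:
        raise ValueError(f"unknown type name: {type_name}")

parsed = [parse_value("42", "int"),
          parse_value("3.5", "double"),
          parse_value("True", "boolean")]
```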
Creates a producer that wraps the data frame.
Converts an RDD[Row] into a tabledata Map.