Available for Enterprise Edition only.
Create the Livy fabric
Let’s get started with creating the Livy fabric.

- Click the Create Entity button in the left navigation bar.
- Click the Fabric tile.
- Fill out the basic information and click Continue.
- Fill in the information about the Spark provider.
  - Provider Type: Spark
  - Provider: Livy
  - Livy URL: The URL of your Livy environment
  - Use mTLS Encryption (optional): The client certificate and client key required for mTLS
  - Authentication type: How Prophecy will authenticate to Livy (see Authentication types below)
- Test the connection to validate the fabric. A sketch of an equivalent manual check follows this list.
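If you want to sanity-check the Livy URL, token, or mTLS material before filling in the fabric, you can call Livy's REST API directly. Below is a minimal sketch in Python, assuming a reachable Livy endpoint; the URL, token, and certificate paths are placeholders, and this is not necessarily the exact request Prophecy issues.

```python
import requests

LIVY_URL = "https://livy.example.com:8998"  # placeholder Livy URL

headers = {}
# headers["Authorization"] = "Bearer <token>"  # if using Bearer Token auth

resp = requests.get(
    f"{LIVY_URL}/sessions",
    headers=headers,
    # cert=("client.crt", "client.key"),  # client certificate and key for mTLS
    timeout=10,
)
resp.raise_for_status()      # a non-2xx response means the fabric test would fail too
print(resp.json()["total"])  # number of active Livy sessions
```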
Authentication types
Prophecy supports the following authentication types for Livy.

- None: Use this when Livy and Prophecy are on the same private network, or when Prophecy can securely communicate with Livy through IP whitelisting.
- Bearer Token: Authenticate using a secure token-based system. All team members who use this fabric authenticate with the same token, which a team admin configures here.
- Kerberos: If you use a Kerberized Hadoop cluster, you can authenticate via Kerberos. For this option, Prophecy cluster admins must first add keytab files in Settings > Admin > Security. You can also enable the impersonate using Proxy-user toggle to allow user-level authorization; Prophecy cluster admins can configure the proxy-user settings in Settings > Admin > Security. A sketch of a Kerberos request with a proxy user follows this list.
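For reference, a hand-rolled Kerberos (SPNEGO) request against Livy looks roughly like the sketch below, using the requests-kerberos package. It assumes a valid ticket obtained via kinit; the Livy URL and the proxy user "analyst1" are hypothetical, and whether impersonation is honored depends on the proxy-user settings mentioned above.

```python
import requests
from requests_kerberos import HTTPKerberosAuth, REQUIRED

LIVY_URL = "https://livy.example.com:8998"  # hypothetical Kerberized Livy endpoint

resp = requests.post(
    f"{LIVY_URL}/sessions",
    auth=HTTPKerberosAuth(mutual_authentication=REQUIRED),  # SPNEGO, ticket from kinit
    headers={"X-Requested-By": "example"},  # required if Livy CSRF protection is enabled
    json={
        "kind": "spark",
        "proxyUser": "analyst1",  # hypothetical user; runs the session as that user
    },
    timeout=30,
)
resp.raise_for_status()
session = resp.json()
print(session["id"], session["state"])  # new session id and its state, e.g. "starting"
```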
Additional Livy configurations
Once the connection is validated:

- Edit or add job sizes. A job size consists of the following (see the sketch after this list for how these values map onto a Livy session request).
  - Name: The name of the job size.
  - Drivers: The number of cores and the amount of memory for the driver.
  - Executors: The number of cores and the amount of memory per executor, plus the total number of executors.
  - (Optional) Spark Config: Spark configuration parameters at the job size level. These settings control cluster behavior when this job size is used. If the same config key is also defined at the fabric level, the job-size-level value overrides it.
- Configure the Prophecy Library settings.
  - Spark version: The Spark version used when a user attaches to a cluster using this fabric.
  - Scala version: The Scala version used when a user attaches to a cluster using this fabric.
  - Scala/Python resolution mode: The location of the libraries to use for Scala or Python.
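For context, a job size's driver, executor, and Spark config values correspond closely to the sizing fields of Livy's POST /sessions API. The sketch below uses hypothetical values; the conf entry illustrates a job-size-level key that would take precedence over the same key set at the fabric level.

```python
import requests

LIVY_URL = "https://livy.example.com:8998"  # placeholder Livy URL

# A hypothetical "Medium" job size expressed as Livy session-creation fields.
job_size_body = {
    "driverCores": 2,
    "driverMemory": "4g",    # Drivers: cores and memory for the driver
    "executorCores": 4,
    "executorMemory": "8g",  # Executors: cores and memory per executor
    "numExecutors": 3,       # Executors: total number of executors
    # Job-size-level Spark config. If the fabric level also set
    # spark.sql.shuffle.partitions, this value would override it.
    "conf": {"spark.sql.shuffle.partitions": "64"},
}

resp = requests.post(
    f"{LIVY_URL}/sessions",
    headers={"X-Requested-By": "example"},  # required if Livy CSRF protection is enabled
    json=job_size_body,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # id of the newly created session
```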

